00:00:00.001 Started by upstream project "autotest-per-patch" build number 132296
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.026 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.026 The recommended git tool is: git
00:00:00.027 using credential 00000000-0000-0000-0000-000000000002
00:00:00.029 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.048 Fetching changes from the remote Git repository
00:00:00.051 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.075 Using shallow fetch with depth 1
00:00:00.075 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.075 > git --version # timeout=10
00:00:00.102 > git --version # 'git version 2.39.2'
00:00:00.102 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.149 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.149 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.115 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.128 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.142 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD)
00:00:03.142 > git config core.sparsecheckout # timeout=10
00:00:03.156 > git read-tree -mu HEAD # timeout=10
00:00:03.174 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5
00:00:03.195 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd"
00:00:03.195 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10
00:00:03.309 [Pipeline] Start of Pipeline
00:00:03.324 [Pipeline] library
00:00:03.325 Loading library shm_lib@master
00:00:03.325 Library shm_lib@master is cached. Copying from home.
00:00:03.340 [Pipeline] node
00:00:03.348 Running on VM-host-SM0 in /var/jenkins/workspace/raid-vg-autotest
00:00:03.350 [Pipeline] {
00:00:03.360 [Pipeline] catchError
00:00:03.361 [Pipeline] {
00:00:03.375 [Pipeline] wrap
00:00:03.384 [Pipeline] {
00:00:03.392 [Pipeline] stage
00:00:03.393 [Pipeline] { (Prologue)
00:00:03.408 [Pipeline] echo
00:00:03.409 Node: VM-host-SM0
00:00:03.416 [Pipeline] cleanWs
00:00:03.424 [WS-CLEANUP] Deleting project workspace...
00:00:03.424 [WS-CLEANUP] Deferred wipeout is used...
00:00:03.428 [WS-CLEANUP] done
00:00:03.613 [Pipeline] setCustomBuildProperty
00:00:03.749 [Pipeline] httpRequest
00:00:04.117 [Pipeline] echo
00:00:04.118 Sorcerer 10.211.164.101 is alive
00:00:04.127 [Pipeline] retry
00:00:04.129 [Pipeline] {
00:00:04.143 [Pipeline] httpRequest
00:00:04.147 HttpMethod: GET
00:00:04.148 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:04.148 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:04.149 Response Code: HTTP/1.1 200 OK
00:00:04.150 Success: Status code 200 is in the accepted range: 200,404
00:00:04.150 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:04.295 [Pipeline] }
00:00:04.312 [Pipeline] // retry
00:00:04.319 [Pipeline] sh
00:00:04.598 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:04.612 [Pipeline] httpRequest
00:00:05.331 [Pipeline] echo
00:00:05.332 Sorcerer 10.211.164.101 is alive
00:00:05.340 [Pipeline] retry
00:00:05.342 [Pipeline] {
00:00:05.352 [Pipeline] httpRequest
00:00:05.355 HttpMethod: GET
00:00:05.357 URL: http://10.211.164.101/packages/spdk_e081e4a1a7154a8a1ed95bfce3dfd33430385b5c.tar.gz
00:00:05.357 Sending request to url: http://10.211.164.101/packages/spdk_e081e4a1a7154a8a1ed95bfce3dfd33430385b5c.tar.gz
00:00:05.358 Response Code: HTTP/1.1 200 OK
00:00:05.358 Success: Status code 200 is in the accepted range: 200,404
00:00:05.358 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_e081e4a1a7154a8a1ed95bfce3dfd33430385b5c.tar.gz
00:00:26.436 [Pipeline] }
00:00:26.452 [Pipeline] // retry
00:00:26.460 [Pipeline] sh
00:00:26.737 + tar --no-same-owner -xf spdk_e081e4a1a7154a8a1ed95bfce3dfd33430385b5c.tar.gz
00:00:30.032 [Pipeline] sh
00:00:30.312 + git -C spdk log --oneline -n5
00:00:30.312 e081e4a1a test/scheduler: Calculate freq turbo range based on sysfs
00:00:30.312 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process
00:00:30.312 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort()
00:00:30.312 4bcab9fb9 correct kick for CQ full case
00:00:30.312 8531656d3 test/nvmf: Interrupt test for local pcie nvme device
00:00:30.331 [Pipeline] writeFile
00:00:30.347 [Pipeline] sh
00:00:30.627 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:30.640 [Pipeline] sh
00:00:30.919 + cat autorun-spdk.conf
00:00:30.919 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:30.919 SPDK_RUN_ASAN=1
00:00:30.919 SPDK_RUN_UBSAN=1
00:00:30.919 SPDK_TEST_RAID=1
00:00:30.919 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:30.925 RUN_NIGHTLY=0
00:00:30.927 [Pipeline] }
00:00:30.941 [Pipeline] // stage
00:00:30.957 [Pipeline] stage
00:00:30.959 [Pipeline] { (Run VM)
00:00:30.973 [Pipeline] sh
00:00:31.252 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:31.252 + echo 'Start stage prepare_nvme.sh'
00:00:31.252 Start stage prepare_nvme.sh
00:00:31.252 + [[ -n 7 ]]
00:00:31.252 + disk_prefix=ex7
00:00:31.252 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:00:31.252 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:00:31.252 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:00:31.252 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:31.252 ++ SPDK_RUN_ASAN=1
00:00:31.252 ++ SPDK_RUN_UBSAN=1
00:00:31.252 ++ SPDK_TEST_RAID=1
00:00:31.252 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:31.252 ++ RUN_NIGHTLY=0
00:00:31.252 + cd /var/jenkins/workspace/raid-vg-autotest
00:00:31.252 + nvme_files=()
00:00:31.252 + declare -A nvme_files
00:00:31.252 + backend_dir=/var/lib/libvirt/images/backends
00:00:31.252 + nvme_files['nvme.img']=5G
00:00:31.252 + nvme_files['nvme-cmb.img']=5G
00:00:31.252 + nvme_files['nvme-multi0.img']=4G
00:00:31.252 + nvme_files['nvme-multi1.img']=4G
00:00:31.252 + nvme_files['nvme-multi2.img']=4G
00:00:31.252 + nvme_files['nvme-openstack.img']=8G
00:00:31.252 + nvme_files['nvme-zns.img']=5G
00:00:31.253 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:31.253 + (( SPDK_TEST_FTL == 1 ))
00:00:31.253 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:31.253 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:31.253 + for nvme in "${!nvme_files[@]}"
00:00:31.253 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G
00:00:31.253 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:31.253 + for nvme in "${!nvme_files[@]}"
00:00:31.253 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G
00:00:31.253 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:31.253 + for nvme in "${!nvme_files[@]}"
00:00:31.253 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G
00:00:31.253 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:31.253 + for nvme in "${!nvme_files[@]}"
00:00:31.253 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G
00:00:31.253 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:31.253 + for nvme in "${!nvme_files[@]}"
00:00:31.253 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G
00:00:31.253 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:31.253 + for nvme in "${!nvme_files[@]}"
00:00:31.253 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G
00:00:31.253 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:31.253 + for nvme in "${!nvme_files[@]}"
00:00:31.253 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G
00:00:31.510 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:31.510 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu
00:00:31.511 + echo 'End stage prepare_nvme.sh'
00:00:31.511 End stage prepare_nvme.sh
00:00:31.521 [Pipeline] sh
00:00:31.800 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:31.800 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora39
00:00:31.800
00:00:31.800 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:00:31.800 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:00:31.800 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:00:31.800 HELP=0
00:00:31.800 DRY_RUN=0
00:00:31.800 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,
00:00:31.800 NVME_DISKS_TYPE=nvme,nvme,
00:00:31.800 NVME_AUTO_CREATE=0
00:00:31.800 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,
00:00:31.800 NVME_CMB=,,
00:00:31.800 NVME_PMR=,,
00:00:31.800 NVME_ZNS=,,
00:00:31.800 NVME_MS=,,
00:00:31.800 NVME_FDP=,,
00:00:31.800 SPDK_VAGRANT_DISTRO=fedora39
00:00:31.800 SPDK_VAGRANT_VMCPU=10
00:00:31.800 SPDK_VAGRANT_VMRAM=12288
00:00:31.800 SPDK_VAGRANT_PROVIDER=libvirt
00:00:31.800 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:31.800 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:31.800 SPDK_OPENSTACK_NETWORK=0
00:00:31.800 VAGRANT_PACKAGE_BOX=0
00:00:31.800 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:00:31.800 FORCE_DISTRO=true
00:00:31.800 VAGRANT_BOX_VERSION=
00:00:31.800 EXTRA_VAGRANTFILES=
00:00:31.800 NIC_MODEL=e1000
00:00:31.800
00:00:31.800 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:00:31.800 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:00:35.127 Bringing machine 'default' up with 'libvirt' provider...
00:00:35.695 ==> default: Creating image (snapshot of base box volume).
00:00:35.953 ==> default: Creating domain with the following settings...
00:00:35.953 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731666536_16dee77b8df4ee3dda3d
00:00:35.953 ==> default: -- Domain type: kvm
00:00:35.954 ==> default: -- Cpus: 10
00:00:35.954 ==> default: -- Feature: acpi
00:00:35.954 ==> default: -- Feature: apic
00:00:35.954 ==> default: -- Feature: pae
00:00:35.954 ==> default: -- Memory: 12288M
00:00:35.954 ==> default: -- Memory Backing: hugepages:
00:00:35.954 ==> default: -- Management MAC:
00:00:35.954 ==> default: -- Loader:
00:00:35.954 ==> default: -- Nvram:
00:00:35.954 ==> default: -- Base box: spdk/fedora39
00:00:35.954 ==> default: -- Storage pool: default
00:00:35.954 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731666536_16dee77b8df4ee3dda3d.img (20G)
00:00:35.954 ==> default: -- Volume Cache: default
00:00:35.954 ==> default: -- Kernel:
00:00:35.954 ==> default: -- Initrd:
00:00:35.954 ==> default: -- Graphics Type: vnc
00:00:35.954 ==> default: -- Graphics Port: -1
00:00:35.954 ==> default: -- Graphics IP: 127.0.0.1
00:00:35.954 ==> default: -- Graphics Password: Not defined
00:00:35.954 ==> default: -- Video Type: cirrus
00:00:35.954 ==> default: -- Video VRAM: 9216
00:00:35.954 ==> default: -- Sound Type:
00:00:35.954 ==> default: -- Keymap: en-us
00:00:35.954 ==> default: -- TPM Path:
00:00:35.954 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:35.954 ==> default: -- Command line args:
00:00:35.954 ==> default: -> value=-device,
00:00:35.954 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:35.954 ==> default: -> value=-drive,
00:00:35.954 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0,
00:00:35.954 ==> default: -> value=-device,
00:00:35.954 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:35.954 ==> default: -> value=-device,
00:00:35.954 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:35.954 ==> default: -> value=-drive,
00:00:35.954 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:00:35.954 ==> default: -> value=-device,
00:00:35.954 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:35.954 ==> default: -> value=-drive,
00:00:35.954 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:00:35.954 ==> default: -> value=-device,
00:00:35.954 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:35.954 ==> default: -> value=-drive,
00:00:35.954 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:00:35.954 ==> default: -> value=-device,
00:00:35.954 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:36.213 ==> default: Creating shared folders metadata...
00:00:36.213 ==> default: Starting domain.
00:00:38.117 ==> default: Waiting for domain to get an IP address...
00:00:56.195 ==> default: Waiting for SSH to become available...
00:00:56.195 ==> default: Configuring and enabling network interfaces...
00:00:58.732 default: SSH address: 192.168.121.231:22
00:00:58.732 default: SSH username: vagrant
00:00:58.732 default: SSH auth method: private key
00:01:01.280 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:09.437 ==> default: Mounting SSHFS shared folder...
00:01:10.004 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:10.004 ==> default: Checking Mount..
00:01:11.379 ==> default: Folder Successfully Mounted!
00:01:11.379 ==> default: Running provisioner: file...
00:01:11.946 default: ~/.gitconfig => .gitconfig
00:01:12.513
00:01:12.513 SUCCESS!
00:01:12.513
00:01:12.513 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:12.513 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:12.513 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:12.513
00:01:12.523 [Pipeline] }
00:01:12.539 [Pipeline] // stage
00:01:12.550 [Pipeline] dir
00:01:12.551 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:01:12.552 [Pipeline] {
00:01:12.567 [Pipeline] catchError
00:01:12.569 [Pipeline] {
00:01:12.583 [Pipeline] sh
00:01:12.863 + vagrant ssh-config --host vagrant
00:01:12.863 + sed -ne /^Host/,$p
00:01:12.863 + tee ssh_conf
00:01:16.147 Host vagrant
00:01:16.147 HostName 192.168.121.231
00:01:16.147 User vagrant
00:01:16.147 Port 22
00:01:16.147 UserKnownHostsFile /dev/null
00:01:16.147 StrictHostKeyChecking no
00:01:16.147 PasswordAuthentication no
00:01:16.147 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:16.147 IdentitiesOnly yes
00:01:16.147 LogLevel FATAL
00:01:16.147 ForwardAgent yes
00:01:16.147 ForwardX11 yes
00:01:16.147
00:01:16.160 [Pipeline] withEnv
00:01:16.163 [Pipeline] {
00:01:16.179 [Pipeline] sh
00:01:16.462 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:16.462 source /etc/os-release
00:01:16.462 [[ -e /image.version ]] && img=$(< /image.version)
00:01:16.462 # Minimal, systemd-like check.
00:01:16.462 if [[ -e /.dockerenv ]]; then
00:01:16.462 # Clear garbage from the node's name:
00:01:16.462 # agt-er_autotest_547-896 -> autotest_547-896
00:01:16.462 # $HOSTNAME is the actual container id
00:01:16.462 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:16.462 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:16.462 # We can assume this is a mount from a host where container is running,
00:01:16.462 # so fetch its hostname to easily identify the target swarm worker.
00:01:16.462 container="$(< /etc/hostname) ($agent)"
00:01:16.462 else
00:01:16.462 # Fallback
00:01:16.462 container=$agent
00:01:16.462 fi
00:01:16.462 fi
00:01:16.462 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:16.462
00:01:16.474 [Pipeline] }
00:01:16.491 [Pipeline] // withEnv
00:01:16.501 [Pipeline] setCustomBuildProperty
00:01:16.517 [Pipeline] stage
00:01:16.519 [Pipeline] { (Tests)
00:01:16.537 [Pipeline] sh
00:01:16.827 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:17.099 [Pipeline] sh
00:01:17.378 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:17.651 [Pipeline] timeout
00:01:17.651 Timeout set to expire in 1 hr 30 min
00:01:17.653 [Pipeline] {
00:01:17.667 [Pipeline] sh
00:01:17.947 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:18.513 HEAD is now at e081e4a1a test/scheduler: Calculate freq turbo range based on sysfs
00:01:18.524 [Pipeline] sh
00:01:18.804 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:19.076 [Pipeline] sh
00:01:19.353 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:19.628 [Pipeline] sh
00:01:19.907 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:01:20.165 ++ readlink -f spdk_repo
00:01:20.165 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:20.165 + [[ -n /home/vagrant/spdk_repo ]]
00:01:20.165 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:20.165 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:20.165 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:20.165 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:20.165 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:20.165 + [[ raid-vg-autotest == pkgdep-* ]]
00:01:20.165 + cd /home/vagrant/spdk_repo
00:01:20.165 + source /etc/os-release
00:01:20.165 ++ NAME='Fedora Linux'
00:01:20.165 ++ VERSION='39 (Cloud Edition)'
00:01:20.165 ++ ID=fedora
00:01:20.165 ++ VERSION_ID=39
00:01:20.165 ++ VERSION_CODENAME=
00:01:20.165 ++ PLATFORM_ID=platform:f39
00:01:20.165 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:20.165 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:20.165 ++ LOGO=fedora-logo-icon
00:01:20.165 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:20.165 ++ HOME_URL=https://fedoraproject.org/
00:01:20.165 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:20.165 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:20.165 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:20.165 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:20.165 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:20.165 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:20.165 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:20.165 ++ SUPPORT_END=2024-11-12
00:01:20.165 ++ VARIANT='Cloud Edition'
00:01:20.165 ++ VARIANT_ID=cloud
00:01:20.165 + uname -a
00:01:20.165 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:20.165 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:20.731 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:20.731 Hugepages
00:01:20.731 node hugesize free / total
00:01:20.731 node0 1048576kB 0 / 0
00:01:20.731 node0 2048kB 0 / 0
00:01:20.731
00:01:20.731 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:20.731 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:20.731 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:20.731 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:01:20.731 + rm -f /tmp/spdk-ld-path
00:01:20.731 + source autorun-spdk.conf
00:01:20.731 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:20.731 ++ SPDK_RUN_ASAN=1
00:01:20.731 ++ SPDK_RUN_UBSAN=1
00:01:20.731 ++ SPDK_TEST_RAID=1
00:01:20.732 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:20.732 ++ RUN_NIGHTLY=0
00:01:20.732 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:20.732 + [[ -n '' ]]
00:01:20.732 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:20.732 + for M in /var/spdk/build-*-manifest.txt
00:01:20.732 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:20.732 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:20.732 + for M in /var/spdk/build-*-manifest.txt
00:01:20.732 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:20.732 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:20.732 + for M in /var/spdk/build-*-manifest.txt
00:01:20.732 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:20.732 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:20.732 ++ uname
00:01:20.732 + [[ Linux == \L\i\n\u\x ]]
00:01:20.732 + sudo dmesg -T
00:01:20.732 + sudo dmesg --clear
00:01:20.732 + dmesg_pid=5254
00:01:20.732 + sudo dmesg -Tw
00:01:20.732 + [[ Fedora Linux == FreeBSD ]]
00:01:20.732 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:20.732 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:20.732 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:20.732 + [[ -x /usr/src/fio-static/fio ]]
00:01:20.732 + export FIO_BIN=/usr/src/fio-static/fio
00:01:20.732 + FIO_BIN=/usr/src/fio-static/fio
00:01:20.732 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:20.732 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:20.732 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:20.732 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:20.732 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:20.732 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:20.732 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:20.732 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:20.732 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:20.732 10:29:41 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:20.732 10:29:41 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:20.732 10:29:41 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:20.732 10:29:41 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:01:20.732 10:29:41 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:01:20.732 10:29:41 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:01:20.732 10:29:41 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:20.732 10:29:41 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
00:01:20.732 10:29:41 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:20.732 10:29:41 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:21.026 10:29:41 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:21.026 10:29:41 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:21.026 10:29:41 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:21.026 10:29:41 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:21.026 10:29:41 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:21.026 10:29:41 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:21.026 10:29:41 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:21.026 10:29:41 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:21.026 10:29:41 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:21.026 10:29:41 -- paths/export.sh@5 -- $ export PATH
00:01:21.026 10:29:41 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:21.026 10:29:41 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:21.026 10:29:41 -- common/autobuild_common.sh@486 -- $ date +%s
00:01:21.026 10:29:41 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731666581.XXXXXX
00:01:21.026 10:29:41 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731666581.r8H5QV
00:01:21.026 10:29:41 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:01:21.026 10:29:41 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:01:21.026 10:29:41 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:21.026 10:29:41 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:21.026 10:29:41 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:21.026 10:29:41 -- common/autobuild_common.sh@502 -- $ get_config_params
00:01:21.026 10:29:41 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:21.026 10:29:41 -- common/autotest_common.sh@10 -- $ set +x
00:01:21.026 10:29:41 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:01:21.026 10:29:41 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:01:21.026 10:29:41 -- pm/common@17 -- $ local monitor
00:01:21.026 10:29:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:21.026 10:29:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:21.026 10:29:41 -- pm/common@25 -- $ sleep 1
00:01:21.026 10:29:41 -- pm/common@21 -- $ date +%s
00:01:21.026 10:29:41 -- pm/common@21 -- $ date +%s
00:01:21.026 10:29:41 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731666581
00:01:21.026 10:29:41 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731666581
00:01:21.026 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731666581_collect-vmstat.pm.log
00:01:21.026 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731666581_collect-cpu-load.pm.log
00:01:21.962 10:29:42 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:01:21.962 10:29:42 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:21.962 10:29:42 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:21.962 10:29:42 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:21.962 10:29:42 -- spdk/autobuild.sh@16 -- $ date -u
00:01:21.962 Fri Nov 15 10:29:42 AM UTC 2024
00:01:21.962 10:29:42 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:21.962 v25.01-pre-190-ge081e4a1a
00:01:21.962 10:29:42 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:21.962 10:29:42 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:21.962 10:29:42 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:21.962 10:29:42 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:21.962 10:29:42 -- common/autotest_common.sh@10 -- $ set +x
00:01:21.962 ************************************
00:01:21.962 START TEST asan
00:01:21.962 ************************************
00:01:21.962 using asan
00:01:21.962 10:29:42 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:21.962
00:01:21.962 real 0m0.000s
00:01:21.962 user 0m0.000s
00:01:21.962 sys 0m0.000s
00:01:21.962 10:29:42 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:21.962 10:29:42 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:21.962 ************************************
00:01:21.962 END TEST asan
00:01:21.962 ************************************
00:01:21.962 10:29:43 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:21.962 10:29:43 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:21.962 10:29:43 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:21.962 10:29:43 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:21.963 10:29:43 -- common/autotest_common.sh@10 -- $ set +x
00:01:21.963 ************************************
00:01:21.963 START TEST ubsan
00:01:21.963 ************************************
00:01:21.963 using ubsan
00:01:21.963 10:29:43 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:21.963
00:01:21.963 real 0m0.000s
00:01:21.963 user 0m0.000s
00:01:21.963 sys 0m0.000s
00:01:21.963 10:29:43 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:21.963 10:29:43 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:21.963 ************************************
00:01:21.963 END TEST ubsan
00:01:21.963 ************************************
00:01:21.963 10:29:43 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:21.963 10:29:43 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:21.963 10:29:43 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:21.963 10:29:43 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:21.963 10:29:43 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:21.963 10:29:43 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:21.963 10:29:43 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:21.963 10:29:43 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:21.963 10:29:43 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:01:22.221 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:22.221 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:22.480 Using 'verbs' RDMA provider
00:01:38.290 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:01:50.554 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:01:50.554 Creating mk/config.mk...done.
00:01:50.554 Creating mk/cc.flags.mk...done.
00:01:50.554 Type 'make' to build.
00:01:50.554 10:30:10 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:01:50.554 10:30:10 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:50.554 10:30:10 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:50.554 10:30:10 -- common/autotest_common.sh@10 -- $ set +x
00:01:50.554 ************************************
00:01:50.554 START TEST make
00:01:50.554 ************************************
00:01:50.554 10:30:10 make -- common/autotest_common.sh@1129 -- $ make -j10
00:01:50.554 make[1]: Nothing to be done for 'all'.
00:02:02.752 The Meson build system 00:02:02.752 Version: 1.5.0 00:02:02.752 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:02.752 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:02.752 Build type: native build 00:02:02.752 Program cat found: YES (/usr/bin/cat) 00:02:02.752 Project name: DPDK 00:02:02.752 Project version: 24.03.0 00:02:02.752 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:02.752 C linker for the host machine: cc ld.bfd 2.40-14 00:02:02.752 Host machine cpu family: x86_64 00:02:02.752 Host machine cpu: x86_64 00:02:02.752 Message: ## Building in Developer Mode ## 00:02:02.752 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:02.752 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:02.752 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:02.752 Program python3 found: YES (/usr/bin/python3) 00:02:02.752 Program cat found: YES (/usr/bin/cat) 00:02:02.752 Compiler for C supports arguments -march=native: YES 00:02:02.752 Checking for size of "void *" : 8 00:02:02.752 Checking for size of "void *" : 8 (cached) 00:02:02.752 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:02.752 Library m found: YES 00:02:02.752 Library numa found: YES 00:02:02.752 Has header "numaif.h" : YES 00:02:02.752 Library fdt found: NO 00:02:02.752 Library execinfo found: NO 00:02:02.752 Has header "execinfo.h" : YES 00:02:02.752 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:02.752 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:02.752 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:02.752 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:02.752 Run-time dependency openssl found: YES 3.1.1 00:02:02.752 Run-time dependency libpcap found: YES 1.10.4 00:02:02.752 Has header "pcap.h" with dependency 
libpcap: YES 00:02:02.752 Compiler for C supports arguments -Wcast-qual: YES 00:02:02.752 Compiler for C supports arguments -Wdeprecated: YES 00:02:02.752 Compiler for C supports arguments -Wformat: YES 00:02:02.752 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:02.752 Compiler for C supports arguments -Wformat-security: NO 00:02:02.752 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:02.752 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:02.752 Compiler for C supports arguments -Wnested-externs: YES 00:02:02.752 Compiler for C supports arguments -Wold-style-definition: YES 00:02:02.752 Compiler for C supports arguments -Wpointer-arith: YES 00:02:02.752 Compiler for C supports arguments -Wsign-compare: YES 00:02:02.752 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:02.752 Compiler for C supports arguments -Wundef: YES 00:02:02.752 Compiler for C supports arguments -Wwrite-strings: YES 00:02:02.752 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:02.752 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:02.752 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:02.752 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:02.752 Program objdump found: YES (/usr/bin/objdump) 00:02:02.752 Compiler for C supports arguments -mavx512f: YES 00:02:02.752 Checking if "AVX512 checking" compiles: YES 00:02:02.752 Fetching value of define "__SSE4_2__" : 1 00:02:02.752 Fetching value of define "__AES__" : 1 00:02:02.752 Fetching value of define "__AVX__" : 1 00:02:02.752 Fetching value of define "__AVX2__" : 1 00:02:02.753 Fetching value of define "__AVX512BW__" : (undefined) 00:02:02.753 Fetching value of define "__AVX512CD__" : (undefined) 00:02:02.753 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:02.753 Fetching value of define "__AVX512F__" : (undefined) 00:02:02.753 Fetching value of define "__AVX512VL__" : 
(undefined) 00:02:02.753 Fetching value of define "__PCLMUL__" : 1 00:02:02.753 Fetching value of define "__RDRND__" : 1 00:02:02.753 Fetching value of define "__RDSEED__" : 1 00:02:02.753 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:02.753 Fetching value of define "__znver1__" : (undefined) 00:02:02.753 Fetching value of define "__znver2__" : (undefined) 00:02:02.753 Fetching value of define "__znver3__" : (undefined) 00:02:02.753 Fetching value of define "__znver4__" : (undefined) 00:02:02.753 Library asan found: YES 00:02:02.753 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:02.753 Message: lib/log: Defining dependency "log" 00:02:02.753 Message: lib/kvargs: Defining dependency "kvargs" 00:02:02.753 Message: lib/telemetry: Defining dependency "telemetry" 00:02:02.753 Library rt found: YES 00:02:02.753 Checking for function "getentropy" : NO 00:02:02.753 Message: lib/eal: Defining dependency "eal" 00:02:02.753 Message: lib/ring: Defining dependency "ring" 00:02:02.753 Message: lib/rcu: Defining dependency "rcu" 00:02:02.753 Message: lib/mempool: Defining dependency "mempool" 00:02:02.753 Message: lib/mbuf: Defining dependency "mbuf" 00:02:02.753 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:02.753 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:02.753 Compiler for C supports arguments -mpclmul: YES 00:02:02.753 Compiler for C supports arguments -maes: YES 00:02:02.753 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:02.753 Compiler for C supports arguments -mavx512bw: YES 00:02:02.753 Compiler for C supports arguments -mavx512dq: YES 00:02:02.753 Compiler for C supports arguments -mavx512vl: YES 00:02:02.753 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:02.753 Compiler for C supports arguments -mavx2: YES 00:02:02.753 Compiler for C supports arguments -mavx: YES 00:02:02.753 Message: lib/net: Defining dependency "net" 00:02:02.753 Message: lib/meter: Defining 
dependency "meter" 00:02:02.753 Message: lib/ethdev: Defining dependency "ethdev" 00:02:02.753 Message: lib/pci: Defining dependency "pci" 00:02:02.753 Message: lib/cmdline: Defining dependency "cmdline" 00:02:02.753 Message: lib/hash: Defining dependency "hash" 00:02:02.753 Message: lib/timer: Defining dependency "timer" 00:02:02.753 Message: lib/compressdev: Defining dependency "compressdev" 00:02:02.753 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:02.753 Message: lib/dmadev: Defining dependency "dmadev" 00:02:02.753 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:02.753 Message: lib/power: Defining dependency "power" 00:02:02.753 Message: lib/reorder: Defining dependency "reorder" 00:02:02.753 Message: lib/security: Defining dependency "security" 00:02:02.753 Has header "linux/userfaultfd.h" : YES 00:02:02.753 Has header "linux/vduse.h" : YES 00:02:02.753 Message: lib/vhost: Defining dependency "vhost" 00:02:02.753 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:02.753 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:02.753 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:02.753 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:02.753 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:02.753 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:02.753 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:02.753 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:02.753 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:02.753 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:02.753 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:02.753 Configuring doxy-api-html.conf using configuration 00:02:02.753 Configuring doxy-api-man.conf using configuration 00:02:02.753 Program mandb found: YES 
(/usr/bin/mandb) 00:02:02.753 Program sphinx-build found: NO 00:02:02.753 Configuring rte_build_config.h using configuration 00:02:02.753 Message: 00:02:02.753 ================= 00:02:02.753 Applications Enabled 00:02:02.753 ================= 00:02:02.753 00:02:02.753 apps: 00:02:02.753 00:02:02.753 00:02:02.753 Message: 00:02:02.753 ================= 00:02:02.753 Libraries Enabled 00:02:02.753 ================= 00:02:02.753 00:02:02.753 libs: 00:02:02.753 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:02.753 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:02.753 cryptodev, dmadev, power, reorder, security, vhost, 00:02:02.753 00:02:02.753 Message: 00:02:02.753 =============== 00:02:02.753 Drivers Enabled 00:02:02.753 =============== 00:02:02.753 00:02:02.753 common: 00:02:02.753 00:02:02.753 bus: 00:02:02.753 pci, vdev, 00:02:02.753 mempool: 00:02:02.753 ring, 00:02:02.753 dma: 00:02:02.753 00:02:02.753 net: 00:02:02.753 00:02:02.753 crypto: 00:02:02.753 00:02:02.753 compress: 00:02:02.753 00:02:02.753 vdpa: 00:02:02.753 00:02:02.753 00:02:02.753 Message: 00:02:02.753 ================= 00:02:02.753 Content Skipped 00:02:02.753 ================= 00:02:02.753 00:02:02.753 apps: 00:02:02.753 dumpcap: explicitly disabled via build config 00:02:02.753 graph: explicitly disabled via build config 00:02:02.753 pdump: explicitly disabled via build config 00:02:02.753 proc-info: explicitly disabled via build config 00:02:02.753 test-acl: explicitly disabled via build config 00:02:02.753 test-bbdev: explicitly disabled via build config 00:02:02.753 test-cmdline: explicitly disabled via build config 00:02:02.753 test-compress-perf: explicitly disabled via build config 00:02:02.753 test-crypto-perf: explicitly disabled via build config 00:02:02.753 test-dma-perf: explicitly disabled via build config 00:02:02.753 test-eventdev: explicitly disabled via build config 00:02:02.753 test-fib: explicitly disabled via build config 00:02:02.753 
test-flow-perf: explicitly disabled via build config 00:02:02.753 test-gpudev: explicitly disabled via build config 00:02:02.753 test-mldev: explicitly disabled via build config 00:02:02.753 test-pipeline: explicitly disabled via build config 00:02:02.753 test-pmd: explicitly disabled via build config 00:02:02.753 test-regex: explicitly disabled via build config 00:02:02.753 test-sad: explicitly disabled via build config 00:02:02.753 test-security-perf: explicitly disabled via build config 00:02:02.753 00:02:02.753 libs: 00:02:02.753 argparse: explicitly disabled via build config 00:02:02.753 metrics: explicitly disabled via build config 00:02:02.753 acl: explicitly disabled via build config 00:02:02.753 bbdev: explicitly disabled via build config 00:02:02.753 bitratestats: explicitly disabled via build config 00:02:02.753 bpf: explicitly disabled via build config 00:02:02.753 cfgfile: explicitly disabled via build config 00:02:02.753 distributor: explicitly disabled via build config 00:02:02.753 efd: explicitly disabled via build config 00:02:02.753 eventdev: explicitly disabled via build config 00:02:02.753 dispatcher: explicitly disabled via build config 00:02:02.753 gpudev: explicitly disabled via build config 00:02:02.753 gro: explicitly disabled via build config 00:02:02.753 gso: explicitly disabled via build config 00:02:02.753 ip_frag: explicitly disabled via build config 00:02:02.753 jobstats: explicitly disabled via build config 00:02:02.753 latencystats: explicitly disabled via build config 00:02:02.753 lpm: explicitly disabled via build config 00:02:02.753 member: explicitly disabled via build config 00:02:02.753 pcapng: explicitly disabled via build config 00:02:02.753 rawdev: explicitly disabled via build config 00:02:02.753 regexdev: explicitly disabled via build config 00:02:02.753 mldev: explicitly disabled via build config 00:02:02.753 rib: explicitly disabled via build config 00:02:02.753 sched: explicitly disabled via build config 00:02:02.753 
stack: explicitly disabled via build config 00:02:02.753 ipsec: explicitly disabled via build config 00:02:02.753 pdcp: explicitly disabled via build config 00:02:02.753 fib: explicitly disabled via build config 00:02:02.753 port: explicitly disabled via build config 00:02:02.753 pdump: explicitly disabled via build config 00:02:02.753 table: explicitly disabled via build config 00:02:02.753 pipeline: explicitly disabled via build config 00:02:02.753 graph: explicitly disabled via build config 00:02:02.753 node: explicitly disabled via build config 00:02:02.753 00:02:02.753 drivers: 00:02:02.753 common/cpt: not in enabled drivers build config 00:02:02.753 common/dpaax: not in enabled drivers build config 00:02:02.753 common/iavf: not in enabled drivers build config 00:02:02.753 common/idpf: not in enabled drivers build config 00:02:02.753 common/ionic: not in enabled drivers build config 00:02:02.753 common/mvep: not in enabled drivers build config 00:02:02.753 common/octeontx: not in enabled drivers build config 00:02:02.753 bus/auxiliary: not in enabled drivers build config 00:02:02.753 bus/cdx: not in enabled drivers build config 00:02:02.753 bus/dpaa: not in enabled drivers build config 00:02:02.753 bus/fslmc: not in enabled drivers build config 00:02:02.753 bus/ifpga: not in enabled drivers build config 00:02:02.753 bus/platform: not in enabled drivers build config 00:02:02.753 bus/uacce: not in enabled drivers build config 00:02:02.753 bus/vmbus: not in enabled drivers build config 00:02:02.753 common/cnxk: not in enabled drivers build config 00:02:02.753 common/mlx5: not in enabled drivers build config 00:02:02.753 common/nfp: not in enabled drivers build config 00:02:02.754 common/nitrox: not in enabled drivers build config 00:02:02.754 common/qat: not in enabled drivers build config 00:02:02.754 common/sfc_efx: not in enabled drivers build config 00:02:02.754 mempool/bucket: not in enabled drivers build config 00:02:02.754 mempool/cnxk: not in enabled 
drivers build config 00:02:02.754 mempool/dpaa: not in enabled drivers build config 00:02:02.754 mempool/dpaa2: not in enabled drivers build config 00:02:02.754 mempool/octeontx: not in enabled drivers build config 00:02:02.754 mempool/stack: not in enabled drivers build config 00:02:02.754 dma/cnxk: not in enabled drivers build config 00:02:02.754 dma/dpaa: not in enabled drivers build config 00:02:02.754 dma/dpaa2: not in enabled drivers build config 00:02:02.754 dma/hisilicon: not in enabled drivers build config 00:02:02.754 dma/idxd: not in enabled drivers build config 00:02:02.754 dma/ioat: not in enabled drivers build config 00:02:02.754 dma/skeleton: not in enabled drivers build config 00:02:02.754 net/af_packet: not in enabled drivers build config 00:02:02.754 net/af_xdp: not in enabled drivers build config 00:02:02.754 net/ark: not in enabled drivers build config 00:02:02.754 net/atlantic: not in enabled drivers build config 00:02:02.754 net/avp: not in enabled drivers build config 00:02:02.754 net/axgbe: not in enabled drivers build config 00:02:02.754 net/bnx2x: not in enabled drivers build config 00:02:02.754 net/bnxt: not in enabled drivers build config 00:02:02.754 net/bonding: not in enabled drivers build config 00:02:02.754 net/cnxk: not in enabled drivers build config 00:02:02.754 net/cpfl: not in enabled drivers build config 00:02:02.754 net/cxgbe: not in enabled drivers build config 00:02:02.754 net/dpaa: not in enabled drivers build config 00:02:02.754 net/dpaa2: not in enabled drivers build config 00:02:02.754 net/e1000: not in enabled drivers build config 00:02:02.754 net/ena: not in enabled drivers build config 00:02:02.754 net/enetc: not in enabled drivers build config 00:02:02.754 net/enetfec: not in enabled drivers build config 00:02:02.754 net/enic: not in enabled drivers build config 00:02:02.754 net/failsafe: not in enabled drivers build config 00:02:02.754 net/fm10k: not in enabled drivers build config 00:02:02.754 net/gve: not in 
enabled drivers build config 00:02:02.754 net/hinic: not in enabled drivers build config 00:02:02.754 net/hns3: not in enabled drivers build config 00:02:02.754 net/i40e: not in enabled drivers build config 00:02:02.754 net/iavf: not in enabled drivers build config 00:02:02.754 net/ice: not in enabled drivers build config 00:02:02.754 net/idpf: not in enabled drivers build config 00:02:02.754 net/igc: not in enabled drivers build config 00:02:02.754 net/ionic: not in enabled drivers build config 00:02:02.754 net/ipn3ke: not in enabled drivers build config 00:02:02.754 net/ixgbe: not in enabled drivers build config 00:02:02.754 net/mana: not in enabled drivers build config 00:02:02.754 net/memif: not in enabled drivers build config 00:02:02.754 net/mlx4: not in enabled drivers build config 00:02:02.754 net/mlx5: not in enabled drivers build config 00:02:02.754 net/mvneta: not in enabled drivers build config 00:02:02.754 net/mvpp2: not in enabled drivers build config 00:02:02.754 net/netvsc: not in enabled drivers build config 00:02:02.754 net/nfb: not in enabled drivers build config 00:02:02.754 net/nfp: not in enabled drivers build config 00:02:02.754 net/ngbe: not in enabled drivers build config 00:02:02.754 net/null: not in enabled drivers build config 00:02:02.754 net/octeontx: not in enabled drivers build config 00:02:02.754 net/octeon_ep: not in enabled drivers build config 00:02:02.754 net/pcap: not in enabled drivers build config 00:02:02.754 net/pfe: not in enabled drivers build config 00:02:02.754 net/qede: not in enabled drivers build config 00:02:02.754 net/ring: not in enabled drivers build config 00:02:02.754 net/sfc: not in enabled drivers build config 00:02:02.754 net/softnic: not in enabled drivers build config 00:02:02.754 net/tap: not in enabled drivers build config 00:02:02.754 net/thunderx: not in enabled drivers build config 00:02:02.754 net/txgbe: not in enabled drivers build config 00:02:02.754 net/vdev_netvsc: not in enabled drivers build 
config 00:02:02.754 net/vhost: not in enabled drivers build config 00:02:02.754 net/virtio: not in enabled drivers build config 00:02:02.754 net/vmxnet3: not in enabled drivers build config 00:02:02.754 raw/*: missing internal dependency, "rawdev" 00:02:02.754 crypto/armv8: not in enabled drivers build config 00:02:02.754 crypto/bcmfs: not in enabled drivers build config 00:02:02.754 crypto/caam_jr: not in enabled drivers build config 00:02:02.754 crypto/ccp: not in enabled drivers build config 00:02:02.754 crypto/cnxk: not in enabled drivers build config 00:02:02.754 crypto/dpaa_sec: not in enabled drivers build config 00:02:02.754 crypto/dpaa2_sec: not in enabled drivers build config 00:02:02.754 crypto/ipsec_mb: not in enabled drivers build config 00:02:02.754 crypto/mlx5: not in enabled drivers build config 00:02:02.754 crypto/mvsam: not in enabled drivers build config 00:02:02.754 crypto/nitrox: not in enabled drivers build config 00:02:02.754 crypto/null: not in enabled drivers build config 00:02:02.754 crypto/octeontx: not in enabled drivers build config 00:02:02.754 crypto/openssl: not in enabled drivers build config 00:02:02.754 crypto/scheduler: not in enabled drivers build config 00:02:02.754 crypto/uadk: not in enabled drivers build config 00:02:02.754 crypto/virtio: not in enabled drivers build config 00:02:02.754 compress/isal: not in enabled drivers build config 00:02:02.754 compress/mlx5: not in enabled drivers build config 00:02:02.754 compress/nitrox: not in enabled drivers build config 00:02:02.754 compress/octeontx: not in enabled drivers build config 00:02:02.754 compress/zlib: not in enabled drivers build config 00:02:02.754 regex/*: missing internal dependency, "regexdev" 00:02:02.754 ml/*: missing internal dependency, "mldev" 00:02:02.754 vdpa/ifc: not in enabled drivers build config 00:02:02.754 vdpa/mlx5: not in enabled drivers build config 00:02:02.754 vdpa/nfp: not in enabled drivers build config 00:02:02.754 vdpa/sfc: not in enabled 
drivers build config 00:02:02.754 event/*: missing internal dependency, "eventdev" 00:02:02.754 baseband/*: missing internal dependency, "bbdev" 00:02:02.754 gpu/*: missing internal dependency, "gpudev" 00:02:02.754 00:02:02.754 00:02:02.754 Build targets in project: 85 00:02:02.754 00:02:02.754 DPDK 24.03.0 00:02:02.754 00:02:02.754 User defined options 00:02:02.754 buildtype : debug 00:02:02.754 default_library : shared 00:02:02.754 libdir : lib 00:02:02.754 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:02.754 b_sanitize : address 00:02:02.754 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:02.754 c_link_args : 00:02:02.754 cpu_instruction_set: native 00:02:02.754 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:02.754 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:02.754 enable_docs : false 00:02:02.754 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:02.754 enable_kmods : false 00:02:02.754 max_lcores : 128 00:02:02.754 tests : false 00:02:02.754 00:02:02.754 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:03.012 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:03.271 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:03.271 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:03.271 [3/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:03.271 [4/268] Linking static target lib/librte_kvargs.a 00:02:03.271 [5/268] Linking static target lib/librte_log.a 00:02:03.271 
[6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:03.836 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.836 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:04.094 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:04.094 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:04.094 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:04.094 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:04.094 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:04.094 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:04.352 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:04.352 [16/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.352 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:04.352 [18/268] Linking static target lib/librte_telemetry.a 00:02:04.352 [19/268] Linking target lib/librte_log.so.24.1 00:02:04.352 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:04.609 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:04.609 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:04.867 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:04.867 [24/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:05.125 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:05.125 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:05.125 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 
00:02:05.125 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:05.125 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:05.125 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:05.125 [31/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.125 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:05.384 [33/268] Linking target lib/librte_telemetry.so.24.1 00:02:05.384 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:05.384 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:05.642 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:05.642 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:05.906 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:05.906 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:06.164 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:06.164 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:06.164 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:06.164 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:06.164 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:06.164 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:06.164 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:06.423 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:06.423 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:06.681 [49/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:06.681 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:06.939 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:06.939 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:06.939 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:07.198 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:07.198 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:07.198 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:07.199 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:07.199 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:07.457 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:07.457 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:07.457 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:07.716 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:07.975 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:07.975 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:07.975 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:07.975 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:07.975 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:08.233 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:08.233 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:08.233 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:08.233 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:08.492 [72/268] Compiling 
C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:08.492 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:08.492 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:08.492 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:08.749 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:08.749 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:08.749 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:09.007 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:09.007 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:09.007 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:09.265 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:09.265 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:09.265 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:09.265 [85/268] Linking static target lib/librte_eal.a 00:02:09.265 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:09.265 [87/268] Linking static target lib/librte_ring.a 00:02:09.523 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:09.781 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:09.781 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:09.781 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:09.781 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:09.781 [93/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.781 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:10.038 [95/268] Linking static 
target lib/librte_mempool.a 00:02:10.038 [96/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:10.296 [97/268] Linking static target lib/librte_rcu.a 00:02:10.296 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:10.554 [99/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:10.554 [100/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:10.554 [101/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:10.554 [102/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:10.554 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:10.554 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:10.812 [105/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.812 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:10.812 [107/268] Linking static target lib/librte_net.a 00:02:10.812 [108/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:10.812 [109/268] Linking static target lib/librte_mbuf.a 00:02:11.070 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:11.070 [111/268] Linking static target lib/librte_meter.a 00:02:11.327 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.327 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:11.327 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:11.327 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.327 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:11.585 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:11.585 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.150 [119/268] 
Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.150 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:12.150 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:12.409 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:12.667 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:12.926 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:12.926 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:12.926 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:12.926 [127/268] Linking static target lib/librte_pci.a 00:02:12.926 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:12.926 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:12.926 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:12.926 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:13.185 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:13.185 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:13.185 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:13.185 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:13.185 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:13.185 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:13.444 [138/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.444 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:13.444 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:13.444 [141/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:13.444 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:13.444 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:13.444 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:13.720 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:14.039 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:14.039 [147/268] Linking static target lib/librte_cmdline.a 00:02:14.039 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:14.297 [149/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:14.297 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:14.297 [151/268] Linking static target lib/librte_timer.a 00:02:14.297 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:14.555 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:14.555 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:14.814 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:14.814 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:14.814 [157/268] Linking static target lib/librte_hash.a 00:02:14.814 [158/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.073 [159/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:15.073 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:15.073 [161/268] Linking static target lib/librte_ethdev.a 00:02:15.073 [162/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:15.073 [163/268] Linking static target lib/librte_compressdev.a 00:02:15.073 [164/268] Compiling C object 
lib/librte_power.a.p/power_guest_channel.c.o 00:02:15.344 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:15.344 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:15.344 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:15.603 [168/268] Linking static target lib/librte_dmadev.a 00:02:15.603 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:15.603 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.603 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:15.862 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:15.862 [173/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.120 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.120 [175/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:16.120 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:16.378 [177/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.379 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:16.637 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:16.637 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:16.637 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:16.896 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:16.896 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:16.896 [184/268] Linking static target lib/librte_power.a 00:02:17.155 [185/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 
00:02:17.155 [186/268] Linking static target lib/librte_cryptodev.a 00:02:17.155 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:17.155 [188/268] Linking static target lib/librte_reorder.a 00:02:17.414 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:17.414 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:17.414 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:17.673 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:17.673 [193/268] Linking static target lib/librte_security.a 00:02:17.673 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.931 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:17.931 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.498 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.498 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:18.755 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:18.755 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:18.755 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:19.014 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:19.273 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:19.273 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:19.532 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:19.532 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:19.532 [207/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:19.790 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:19.790 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:19.790 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:19.790 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:20.049 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:20.049 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:20.049 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:20.049 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:20.050 [216/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:20.050 [217/268] Linking static target drivers/librte_bus_vdev.a 00:02:20.050 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:20.050 [219/268] Linking static target drivers/librte_bus_pci.a 00:02:20.324 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:20.324 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:20.591 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.591 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:20.591 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:20.591 [225/268] Linking static target drivers/librte_mempool_ring.a 00:02:20.591 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:20.591 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.525 [228/268] Generating lib/eal.sym_chk with a custom 
command (wrapped by meson to capture output) 00:02:21.525 [229/268] Linking target lib/librte_eal.so.24.1 00:02:21.525 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:21.525 [231/268] Linking target lib/librte_pci.so.24.1 00:02:21.525 [232/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:21.525 [233/268] Linking target lib/librte_ring.so.24.1 00:02:21.783 [234/268] Linking target lib/librte_dmadev.so.24.1 00:02:21.783 [235/268] Linking target lib/librte_timer.so.24.1 00:02:21.783 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:21.783 [237/268] Linking target lib/librte_meter.so.24.1 00:02:21.783 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:21.783 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:21.783 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:21.783 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:21.783 [242/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:21.783 [243/268] Linking target lib/librte_rcu.so.24.1 00:02:21.783 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:21.783 [245/268] Linking target lib/librte_mempool.so.24.1 00:02:22.042 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:22.042 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:22.042 [248/268] Linking target lib/librte_mbuf.so.24.1 00:02:22.042 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:22.300 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:22.300 [251/268] Linking target lib/librte_reorder.so.24.1 00:02:22.300 [252/268] Linking target lib/librte_compressdev.so.24.1 00:02:22.300 [253/268] Linking target 
lib/librte_net.so.24.1 00:02:22.300 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:02:22.558 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:22.558 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:22.558 [257/268] Linking target lib/librte_hash.so.24.1 00:02:22.558 [258/268] Linking target lib/librte_cmdline.so.24.1 00:02:22.558 [259/268] Linking target lib/librte_security.so.24.1 00:02:22.558 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:22.816 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.074 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:23.074 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:23.423 [264/268] Linking target lib/librte_power.so.24.1 00:02:25.979 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:25.979 [266/268] Linking static target lib/librte_vhost.a 00:02:27.880 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.880 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:27.880 INFO: autodetecting backend as ninja 00:02:27.880 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:49.829 CC lib/ut_mock/mock.o 00:02:49.829 CC lib/log/log.o 00:02:49.829 CC lib/log/log_deprecated.o 00:02:49.829 CC lib/log/log_flags.o 00:02:49.829 CC lib/ut/ut.o 00:02:49.829 LIB libspdk_ut_mock.a 00:02:49.829 LIB libspdk_ut.a 00:02:49.829 LIB libspdk_log.a 00:02:49.829 SO libspdk_ut_mock.so.6.0 00:02:49.829 SO libspdk_ut.so.2.0 00:02:49.829 SO libspdk_log.so.7.1 00:02:49.829 SYMLINK libspdk_ut_mock.so 00:02:49.829 SYMLINK libspdk_ut.so 00:02:49.829 SYMLINK libspdk_log.so 00:02:49.829 CC lib/util/base64.o 00:02:49.829 CC lib/util/bit_array.o 
00:02:49.829 CC lib/util/cpuset.o 00:02:49.829 CC lib/util/crc16.o 00:02:49.829 CC lib/util/crc32.o 00:02:49.829 CC lib/util/crc32c.o 00:02:49.829 CXX lib/trace_parser/trace.o 00:02:49.829 CC lib/dma/dma.o 00:02:49.829 CC lib/ioat/ioat.o 00:02:49.829 CC lib/vfio_user/host/vfio_user_pci.o 00:02:49.829 CC lib/vfio_user/host/vfio_user.o 00:02:49.829 CC lib/util/crc32_ieee.o 00:02:49.829 CC lib/util/crc64.o 00:02:49.829 CC lib/util/dif.o 00:02:49.829 LIB libspdk_dma.a 00:02:49.829 SO libspdk_dma.so.5.0 00:02:49.829 CC lib/util/fd.o 00:02:49.829 CC lib/util/fd_group.o 00:02:49.829 SYMLINK libspdk_dma.so 00:02:49.829 CC lib/util/file.o 00:02:49.829 CC lib/util/hexlify.o 00:02:49.829 CC lib/util/iov.o 00:02:49.829 CC lib/util/math.o 00:02:49.829 LIB libspdk_vfio_user.a 00:02:49.829 SO libspdk_vfio_user.so.5.0 00:02:49.829 CC lib/util/net.o 00:02:49.829 LIB libspdk_ioat.a 00:02:49.829 SYMLINK libspdk_vfio_user.so 00:02:49.829 CC lib/util/pipe.o 00:02:49.829 SO libspdk_ioat.so.7.0 00:02:49.829 CC lib/util/strerror_tls.o 00:02:49.829 CC lib/util/string.o 00:02:49.829 CC lib/util/uuid.o 00:02:49.829 SYMLINK libspdk_ioat.so 00:02:49.829 CC lib/util/xor.o 00:02:49.829 CC lib/util/zipf.o 00:02:49.829 CC lib/util/md5.o 00:02:49.829 LIB libspdk_util.a 00:02:49.829 SO libspdk_util.so.10.1 00:02:49.829 LIB libspdk_trace_parser.a 00:02:49.829 SO libspdk_trace_parser.so.6.0 00:02:49.829 SYMLINK libspdk_util.so 00:02:49.829 SYMLINK libspdk_trace_parser.so 00:02:49.829 CC lib/rdma_utils/rdma_utils.o 00:02:49.829 CC lib/idxd/idxd_user.o 00:02:49.829 CC lib/idxd/idxd.o 00:02:49.829 CC lib/idxd/idxd_kernel.o 00:02:49.829 CC lib/vmd/vmd.o 00:02:49.829 CC lib/vmd/led.o 00:02:49.829 CC lib/json/json_parse.o 00:02:49.829 CC lib/env_dpdk/env.o 00:02:49.829 CC lib/json/json_util.o 00:02:49.829 CC lib/conf/conf.o 00:02:49.829 CC lib/json/json_write.o 00:02:49.829 CC lib/env_dpdk/memory.o 00:02:49.829 CC lib/env_dpdk/pci.o 00:02:49.829 CC lib/env_dpdk/init.o 00:02:49.829 LIB libspdk_rdma_utils.a 
00:02:49.829 LIB libspdk_conf.a 00:02:49.829 CC lib/env_dpdk/threads.o 00:02:49.829 SO libspdk_rdma_utils.so.1.0 00:02:49.829 SO libspdk_conf.so.6.0 00:02:49.829 SYMLINK libspdk_rdma_utils.so 00:02:49.829 SYMLINK libspdk_conf.so 00:02:49.829 CC lib/env_dpdk/pci_ioat.o 00:02:49.829 CC lib/env_dpdk/pci_virtio.o 00:02:49.829 LIB libspdk_json.a 00:02:49.829 SO libspdk_json.so.6.0 00:02:49.829 CC lib/env_dpdk/pci_vmd.o 00:02:49.829 CC lib/rdma_provider/common.o 00:02:49.829 SYMLINK libspdk_json.so 00:02:49.829 CC lib/env_dpdk/pci_idxd.o 00:02:50.088 CC lib/env_dpdk/pci_event.o 00:02:50.088 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:50.088 CC lib/env_dpdk/sigbus_handler.o 00:02:50.088 CC lib/env_dpdk/pci_dpdk.o 00:02:50.088 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:50.088 LIB libspdk_idxd.a 00:02:50.088 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:50.088 SO libspdk_idxd.so.12.1 00:02:50.347 LIB libspdk_vmd.a 00:02:50.347 SYMLINK libspdk_idxd.so 00:02:50.347 SO libspdk_vmd.so.6.0 00:02:50.347 LIB libspdk_rdma_provider.a 00:02:50.347 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:50.347 CC lib/jsonrpc/jsonrpc_server.o 00:02:50.347 CC lib/jsonrpc/jsonrpc_client.o 00:02:50.347 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:50.347 SO libspdk_rdma_provider.so.7.0 00:02:50.347 SYMLINK libspdk_vmd.so 00:02:50.347 SYMLINK libspdk_rdma_provider.so 00:02:50.606 LIB libspdk_jsonrpc.a 00:02:50.606 SO libspdk_jsonrpc.so.6.0 00:02:50.918 SYMLINK libspdk_jsonrpc.so 00:02:50.918 CC lib/rpc/rpc.o 00:02:51.176 LIB libspdk_env_dpdk.a 00:02:51.176 LIB libspdk_rpc.a 00:02:51.176 SO libspdk_env_dpdk.so.15.1 00:02:51.177 SO libspdk_rpc.so.6.0 00:02:51.435 SYMLINK libspdk_rpc.so 00:02:51.435 SYMLINK libspdk_env_dpdk.so 00:02:51.694 CC lib/trace/trace_flags.o 00:02:51.694 CC lib/trace/trace.o 00:02:51.694 CC lib/trace/trace_rpc.o 00:02:51.694 CC lib/notify/notify_rpc.o 00:02:51.694 CC lib/notify/notify.o 00:02:51.694 CC lib/keyring/keyring.o 00:02:51.694 CC lib/keyring/keyring_rpc.o 00:02:51.952 LIB 
libspdk_notify.a 00:02:51.952 SO libspdk_notify.so.6.0 00:02:51.952 LIB libspdk_trace.a 00:02:51.952 LIB libspdk_keyring.a 00:02:51.952 SYMLINK libspdk_notify.so 00:02:51.952 SO libspdk_trace.so.11.0 00:02:51.952 SO libspdk_keyring.so.2.0 00:02:51.952 SYMLINK libspdk_trace.so 00:02:51.952 SYMLINK libspdk_keyring.so 00:02:52.211 CC lib/thread/iobuf.o 00:02:52.211 CC lib/thread/thread.o 00:02:52.211 CC lib/sock/sock_rpc.o 00:02:52.211 CC lib/sock/sock.o 00:02:53.145 LIB libspdk_sock.a 00:02:53.145 SO libspdk_sock.so.10.0 00:02:53.145 SYMLINK libspdk_sock.so 00:02:53.403 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:53.403 CC lib/nvme/nvme_fabric.o 00:02:53.403 CC lib/nvme/nvme_ctrlr.o 00:02:53.403 CC lib/nvme/nvme_ns_cmd.o 00:02:53.403 CC lib/nvme/nvme_pcie_common.o 00:02:53.403 CC lib/nvme/nvme_pcie.o 00:02:53.403 CC lib/nvme/nvme_qpair.o 00:02:53.403 CC lib/nvme/nvme_ns.o 00:02:53.403 CC lib/nvme/nvme.o 00:02:54.335 CC lib/nvme/nvme_quirks.o 00:02:54.335 CC lib/nvme/nvme_transport.o 00:02:54.335 LIB libspdk_thread.a 00:02:54.335 CC lib/nvme/nvme_discovery.o 00:02:54.335 SO libspdk_thread.so.11.0 00:02:54.335 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:54.335 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:54.593 SYMLINK libspdk_thread.so 00:02:54.593 CC lib/nvme/nvme_tcp.o 00:02:54.593 CC lib/nvme/nvme_opal.o 00:02:54.593 CC lib/nvme/nvme_io_msg.o 00:02:54.851 CC lib/nvme/nvme_poll_group.o 00:02:54.851 CC lib/nvme/nvme_zns.o 00:02:55.108 CC lib/nvme/nvme_stubs.o 00:02:55.366 CC lib/nvme/nvme_auth.o 00:02:55.366 CC lib/nvme/nvme_cuse.o 00:02:55.366 CC lib/nvme/nvme_rdma.o 00:02:55.624 CC lib/accel/accel.o 00:02:55.624 CC lib/blob/blobstore.o 00:02:55.881 CC lib/init/json_config.o 00:02:55.881 CC lib/blob/request.o 00:02:55.881 CC lib/virtio/virtio.o 00:02:56.139 CC lib/init/subsystem.o 00:02:56.397 CC lib/init/subsystem_rpc.o 00:02:56.397 CC lib/init/rpc.o 00:02:56.397 CC lib/virtio/virtio_vhost_user.o 00:02:56.397 CC lib/fsdev/fsdev.o 00:02:56.397 CC lib/fsdev/fsdev_io.o 00:02:56.397 CC 
lib/fsdev/fsdev_rpc.o 00:02:56.397 LIB libspdk_init.a 00:02:56.654 SO libspdk_init.so.6.0 00:02:56.654 CC lib/accel/accel_rpc.o 00:02:56.654 CC lib/accel/accel_sw.o 00:02:56.654 SYMLINK libspdk_init.so 00:02:56.654 CC lib/virtio/virtio_vfio_user.o 00:02:56.654 CC lib/virtio/virtio_pci.o 00:02:56.920 CC lib/blob/zeroes.o 00:02:56.920 CC lib/blob/blob_bs_dev.o 00:02:57.177 CC lib/event/app.o 00:02:57.177 CC lib/event/reactor.o 00:02:57.177 CC lib/event/app_rpc.o 00:02:57.177 CC lib/event/log_rpc.o 00:02:57.177 LIB libspdk_virtio.a 00:02:57.177 LIB libspdk_accel.a 00:02:57.177 CC lib/event/scheduler_static.o 00:02:57.177 SO libspdk_accel.so.16.0 00:02:57.177 LIB libspdk_nvme.a 00:02:57.177 SO libspdk_virtio.so.7.0 00:02:57.177 LIB libspdk_fsdev.a 00:02:57.436 SO libspdk_fsdev.so.2.0 00:02:57.436 SYMLINK libspdk_virtio.so 00:02:57.436 SYMLINK libspdk_accel.so 00:02:57.436 SYMLINK libspdk_fsdev.so 00:02:57.436 SO libspdk_nvme.so.15.0 00:02:57.694 CC lib/bdev/bdev.o 00:02:57.694 CC lib/bdev/bdev_zone.o 00:02:57.694 CC lib/bdev/bdev_rpc.o 00:02:57.694 CC lib/bdev/scsi_nvme.o 00:02:57.694 CC lib/bdev/part.o 00:02:57.694 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:57.694 LIB libspdk_event.a 00:02:57.694 SO libspdk_event.so.14.0 00:02:57.952 SYMLINK libspdk_event.so 00:02:57.952 SYMLINK libspdk_nvme.so 00:02:58.517 LIB libspdk_fuse_dispatcher.a 00:02:58.517 SO libspdk_fuse_dispatcher.so.1.0 00:02:58.517 SYMLINK libspdk_fuse_dispatcher.so 00:03:00.416 LIB libspdk_blob.a 00:03:00.416 SO libspdk_blob.so.11.0 00:03:00.416 SYMLINK libspdk_blob.so 00:03:00.673 CC lib/blobfs/blobfs.o 00:03:00.673 CC lib/blobfs/tree.o 00:03:00.673 CC lib/lvol/lvol.o 00:03:01.239 LIB libspdk_bdev.a 00:03:01.497 SO libspdk_bdev.so.17.0 00:03:01.497 SYMLINK libspdk_bdev.so 00:03:01.755 LIB libspdk_blobfs.a 00:03:01.755 SO libspdk_blobfs.so.10.0 00:03:01.755 CC lib/ftl/ftl_core.o 00:03:01.755 CC lib/nvmf/ctrlr.o 00:03:01.755 CC lib/nvmf/ctrlr_discovery.o 00:03:01.755 CC lib/nvmf/ctrlr_bdev.o 
00:03:01.755 CC lib/ftl/ftl_init.o 00:03:01.755 CC lib/scsi/dev.o 00:03:01.755 CC lib/ublk/ublk.o 00:03:01.755 CC lib/nbd/nbd.o 00:03:01.755 SYMLINK libspdk_blobfs.so 00:03:01.755 CC lib/nbd/nbd_rpc.o 00:03:02.013 LIB libspdk_lvol.a 00:03:02.013 SO libspdk_lvol.so.10.0 00:03:02.013 CC lib/scsi/lun.o 00:03:02.013 CC lib/scsi/port.o 00:03:02.013 CC lib/ftl/ftl_layout.o 00:03:02.013 SYMLINK libspdk_lvol.so 00:03:02.013 CC lib/ftl/ftl_debug.o 00:03:02.270 CC lib/ublk/ublk_rpc.o 00:03:02.270 CC lib/scsi/scsi.o 00:03:02.270 CC lib/scsi/scsi_bdev.o 00:03:02.270 CC lib/ftl/ftl_io.o 00:03:02.527 LIB libspdk_nbd.a 00:03:02.527 CC lib/scsi/scsi_pr.o 00:03:02.527 CC lib/scsi/scsi_rpc.o 00:03:02.527 CC lib/scsi/task.o 00:03:02.527 SO libspdk_nbd.so.7.0 00:03:02.527 CC lib/ftl/ftl_sb.o 00:03:02.527 SYMLINK libspdk_nbd.so 00:03:02.527 CC lib/nvmf/subsystem.o 00:03:02.527 CC lib/nvmf/nvmf.o 00:03:02.527 LIB libspdk_ublk.a 00:03:02.785 CC lib/nvmf/nvmf_rpc.o 00:03:02.785 SO libspdk_ublk.so.3.0 00:03:02.785 CC lib/ftl/ftl_l2p.o 00:03:02.785 CC lib/ftl/ftl_l2p_flat.o 00:03:02.785 CC lib/ftl/ftl_nv_cache.o 00:03:02.785 SYMLINK libspdk_ublk.so 00:03:02.785 CC lib/ftl/ftl_band.o 00:03:02.785 CC lib/nvmf/transport.o 00:03:03.042 CC lib/nvmf/tcp.o 00:03:03.042 CC lib/ftl/ftl_band_ops.o 00:03:03.042 LIB libspdk_scsi.a 00:03:03.042 SO libspdk_scsi.so.9.0 00:03:03.299 SYMLINK libspdk_scsi.so 00:03:03.299 CC lib/nvmf/stubs.o 00:03:03.299 CC lib/ftl/ftl_writer.o 00:03:03.556 CC lib/nvmf/mdns_server.o 00:03:03.812 CC lib/ftl/ftl_rq.o 00:03:03.812 CC lib/nvmf/rdma.o 00:03:03.812 CC lib/nvmf/auth.o 00:03:03.812 CC lib/iscsi/conn.o 00:03:03.812 CC lib/ftl/ftl_reloc.o 00:03:04.069 CC lib/ftl/ftl_l2p_cache.o 00:03:04.069 CC lib/vhost/vhost.o 00:03:04.069 CC lib/ftl/ftl_p2l.o 00:03:04.069 CC lib/ftl/ftl_p2l_log.o 00:03:04.326 CC lib/iscsi/init_grp.o 00:03:04.326 CC lib/iscsi/iscsi.o 00:03:04.584 CC lib/iscsi/param.o 00:03:04.584 CC lib/iscsi/portal_grp.o 00:03:04.584 CC lib/iscsi/tgt_node.o 
00:03:04.584 CC lib/ftl/mngt/ftl_mngt.o 00:03:04.842 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:04.842 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:04.842 CC lib/iscsi/iscsi_subsystem.o 00:03:05.099 CC lib/iscsi/iscsi_rpc.o 00:03:05.099 CC lib/iscsi/task.o 00:03:05.099 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:05.099 CC lib/vhost/vhost_rpc.o 00:03:05.099 CC lib/vhost/vhost_scsi.o 00:03:05.099 CC lib/vhost/vhost_blk.o 00:03:05.358 CC lib/vhost/rte_vhost_user.o 00:03:05.358 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:05.358 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:05.616 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:05.616 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:05.616 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:05.616 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:05.616 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:05.872 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:05.872 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:05.872 CC lib/ftl/utils/ftl_conf.o 00:03:05.872 CC lib/ftl/utils/ftl_md.o 00:03:06.129 CC lib/ftl/utils/ftl_mempool.o 00:03:06.129 CC lib/ftl/utils/ftl_bitmap.o 00:03:06.129 CC lib/ftl/utils/ftl_property.o 00:03:06.129 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:06.387 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:06.387 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:06.387 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:06.644 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:06.644 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:06.644 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:06.644 LIB libspdk_iscsi.a 00:03:06.644 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:06.644 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:06.644 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:06.644 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:06.644 SO libspdk_iscsi.so.8.0 00:03:06.644 LIB libspdk_vhost.a 00:03:06.644 LIB libspdk_nvmf.a 00:03:06.903 SO libspdk_vhost.so.8.0 00:03:06.903 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:06.903 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:06.903 CC lib/ftl/base/ftl_base_dev.o 00:03:06.903 SO libspdk_nvmf.so.20.0 00:03:06.903 SYMLINK 
libspdk_vhost.so 00:03:06.903 CC lib/ftl/base/ftl_base_bdev.o 00:03:06.903 CC lib/ftl/ftl_trace.o 00:03:06.903 SYMLINK libspdk_iscsi.so 00:03:07.161 SYMLINK libspdk_nvmf.so 00:03:07.161 LIB libspdk_ftl.a 00:03:07.726 SO libspdk_ftl.so.9.0 00:03:07.984 SYMLINK libspdk_ftl.so 00:03:08.241 CC module/env_dpdk/env_dpdk_rpc.o 00:03:08.241 CC module/keyring/linux/keyring.o 00:03:08.241 CC module/blob/bdev/blob_bdev.o 00:03:08.241 CC module/sock/posix/posix.o 00:03:08.241 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:08.500 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:08.500 CC module/keyring/file/keyring.o 00:03:08.500 CC module/accel/error/accel_error.o 00:03:08.500 CC module/fsdev/aio/fsdev_aio.o 00:03:08.500 CC module/scheduler/gscheduler/gscheduler.o 00:03:08.500 LIB libspdk_env_dpdk_rpc.a 00:03:08.500 SO libspdk_env_dpdk_rpc.so.6.0 00:03:08.500 SYMLINK libspdk_env_dpdk_rpc.so 00:03:08.500 CC module/accel/error/accel_error_rpc.o 00:03:08.500 CC module/keyring/linux/keyring_rpc.o 00:03:08.500 CC module/keyring/file/keyring_rpc.o 00:03:08.758 LIB libspdk_scheduler_dynamic.a 00:03:08.758 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:08.758 LIB libspdk_scheduler_dpdk_governor.a 00:03:08.758 LIB libspdk_accel_error.a 00:03:08.758 SO libspdk_scheduler_dynamic.so.4.0 00:03:08.758 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:08.758 LIB libspdk_keyring_linux.a 00:03:08.758 LIB libspdk_scheduler_gscheduler.a 00:03:08.758 SO libspdk_accel_error.so.2.0 00:03:08.758 SO libspdk_keyring_linux.so.1.0 00:03:08.758 SO libspdk_scheduler_gscheduler.so.4.0 00:03:08.758 SYMLINK libspdk_scheduler_dynamic.so 00:03:08.758 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:08.758 CC module/fsdev/aio/linux_aio_mgr.o 00:03:08.758 SYMLINK libspdk_keyring_linux.so 00:03:08.758 SYMLINK libspdk_scheduler_gscheduler.so 00:03:08.758 SYMLINK libspdk_accel_error.so 00:03:08.758 LIB libspdk_keyring_file.a 00:03:08.758 SO libspdk_keyring_file.so.2.0 00:03:09.016 SYMLINK 
libspdk_keyring_file.so 00:03:09.016 LIB libspdk_blob_bdev.a 00:03:09.016 CC module/accel/ioat/accel_ioat_rpc.o 00:03:09.016 CC module/accel/ioat/accel_ioat.o 00:03:09.016 CC module/accel/dsa/accel_dsa.o 00:03:09.016 CC module/accel/dsa/accel_dsa_rpc.o 00:03:09.016 SO libspdk_blob_bdev.so.11.0 00:03:09.016 CC module/accel/iaa/accel_iaa.o 00:03:09.016 CC module/accel/iaa/accel_iaa_rpc.o 00:03:09.016 SYMLINK libspdk_blob_bdev.so 00:03:09.273 LIB libspdk_accel_iaa.a 00:03:09.273 LIB libspdk_accel_ioat.a 00:03:09.273 SO libspdk_accel_iaa.so.3.0 00:03:09.273 SO libspdk_accel_ioat.so.6.0 00:03:09.273 LIB libspdk_fsdev_aio.a 00:03:09.273 SYMLINK libspdk_accel_ioat.so 00:03:09.273 SYMLINK libspdk_accel_iaa.so 00:03:09.273 SO libspdk_fsdev_aio.so.1.0 00:03:09.273 CC module/bdev/error/vbdev_error.o 00:03:09.531 LIB libspdk_accel_dsa.a 00:03:09.531 CC module/bdev/delay/vbdev_delay.o 00:03:09.531 CC module/blobfs/bdev/blobfs_bdev.o 00:03:09.531 CC module/bdev/lvol/vbdev_lvol.o 00:03:09.531 CC module/bdev/gpt/gpt.o 00:03:09.531 LIB libspdk_sock_posix.a 00:03:09.531 SO libspdk_accel_dsa.so.5.0 00:03:09.531 SO libspdk_sock_posix.so.6.0 00:03:09.531 SYMLINK libspdk_fsdev_aio.so 00:03:09.531 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:09.531 SYMLINK libspdk_accel_dsa.so 00:03:09.531 CC module/bdev/null/bdev_null.o 00:03:09.531 SYMLINK libspdk_sock_posix.so 00:03:09.531 CC module/bdev/malloc/bdev_malloc.o 00:03:09.531 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:09.788 CC module/bdev/gpt/vbdev_gpt.o 00:03:09.788 LIB libspdk_blobfs_bdev.a 00:03:09.788 CC module/bdev/nvme/bdev_nvme.o 00:03:09.788 SO libspdk_blobfs_bdev.so.6.0 00:03:09.788 CC module/bdev/error/vbdev_error_rpc.o 00:03:09.788 CC module/bdev/passthru/vbdev_passthru.o 00:03:09.788 SYMLINK libspdk_blobfs_bdev.so 00:03:09.788 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:10.045 CC module/bdev/null/bdev_null_rpc.o 00:03:10.045 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:10.045 LIB libspdk_bdev_error.a 00:03:10.045 SO 
libspdk_bdev_error.so.6.0 00:03:10.045 LIB libspdk_bdev_gpt.a 00:03:10.045 SYMLINK libspdk_bdev_error.so 00:03:10.045 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:10.045 SO libspdk_bdev_gpt.so.6.0 00:03:10.045 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:10.045 LIB libspdk_bdev_null.a 00:03:10.045 LIB libspdk_bdev_delay.a 00:03:10.045 LIB libspdk_bdev_lvol.a 00:03:10.045 SO libspdk_bdev_null.so.6.0 00:03:10.045 SYMLINK libspdk_bdev_gpt.so 00:03:10.303 SO libspdk_bdev_delay.so.6.0 00:03:10.303 SO libspdk_bdev_lvol.so.6.0 00:03:10.303 SYMLINK libspdk_bdev_null.so 00:03:10.303 SYMLINK libspdk_bdev_delay.so 00:03:10.303 CC module/bdev/nvme/nvme_rpc.o 00:03:10.303 LIB libspdk_bdev_malloc.a 00:03:10.303 SYMLINK libspdk_bdev_lvol.so 00:03:10.303 LIB libspdk_bdev_passthru.a 00:03:10.303 SO libspdk_bdev_malloc.so.6.0 00:03:10.303 CC module/bdev/raid/bdev_raid.o 00:03:10.303 SO libspdk_bdev_passthru.so.6.0 00:03:10.303 CC module/bdev/split/vbdev_split.o 00:03:10.303 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:10.303 SYMLINK libspdk_bdev_malloc.so 00:03:10.303 SYMLINK libspdk_bdev_passthru.so 00:03:10.303 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:10.561 CC module/bdev/ftl/bdev_ftl.o 00:03:10.561 CC module/bdev/aio/bdev_aio.o 00:03:10.561 CC module/bdev/aio/bdev_aio_rpc.o 00:03:10.561 CC module/bdev/iscsi/bdev_iscsi.o 00:03:10.561 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:10.561 CC module/bdev/split/vbdev_split_rpc.o 00:03:10.561 CC module/bdev/raid/bdev_raid_rpc.o 00:03:10.819 CC module/bdev/nvme/bdev_mdns_client.o 00:03:10.819 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:10.819 CC module/bdev/nvme/vbdev_opal.o 00:03:10.819 LIB libspdk_bdev_split.a 00:03:10.819 LIB libspdk_bdev_zone_block.a 00:03:10.819 SO libspdk_bdev_split.so.6.0 00:03:10.819 SO libspdk_bdev_zone_block.so.6.0 00:03:10.819 LIB libspdk_bdev_aio.a 00:03:10.819 SYMLINK libspdk_bdev_split.so 00:03:10.819 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:11.076 CC module/bdev/raid/bdev_raid_sb.o 
00:03:11.076 SYMLINK libspdk_bdev_zone_block.so 00:03:11.076 SO libspdk_bdev_aio.so.6.0 00:03:11.076 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:11.076 LIB libspdk_bdev_iscsi.a 00:03:11.076 SYMLINK libspdk_bdev_aio.so 00:03:11.076 CC module/bdev/raid/raid0.o 00:03:11.076 SO libspdk_bdev_iscsi.so.6.0 00:03:11.076 LIB libspdk_bdev_ftl.a 00:03:11.076 SO libspdk_bdev_ftl.so.6.0 00:03:11.076 SYMLINK libspdk_bdev_iscsi.so 00:03:11.076 CC module/bdev/raid/raid1.o 00:03:11.334 CC module/bdev/raid/concat.o 00:03:11.334 CC module/bdev/raid/raid5f.o 00:03:11.334 SYMLINK libspdk_bdev_ftl.so 00:03:11.334 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:11.334 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:11.334 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:11.919 LIB libspdk_bdev_raid.a 00:03:11.919 LIB libspdk_bdev_virtio.a 00:03:11.919 SO libspdk_bdev_raid.so.6.0 00:03:12.177 SO libspdk_bdev_virtio.so.6.0 00:03:12.177 SYMLINK libspdk_bdev_raid.so 00:03:12.177 SYMLINK libspdk_bdev_virtio.so 00:03:13.110 LIB libspdk_bdev_nvme.a 00:03:13.368 SO libspdk_bdev_nvme.so.7.1 00:03:13.368 SYMLINK libspdk_bdev_nvme.so 00:03:13.935 CC module/event/subsystems/vmd/vmd.o 00:03:13.935 CC module/event/subsystems/scheduler/scheduler.o 00:03:13.935 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:13.935 CC module/event/subsystems/iobuf/iobuf.o 00:03:13.935 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:13.935 CC module/event/subsystems/keyring/keyring.o 00:03:13.935 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:13.935 CC module/event/subsystems/fsdev/fsdev.o 00:03:13.935 CC module/event/subsystems/sock/sock.o 00:03:14.192 LIB libspdk_event_scheduler.a 00:03:14.192 LIB libspdk_event_vhost_blk.a 00:03:14.192 LIB libspdk_event_vmd.a 00:03:14.192 LIB libspdk_event_fsdev.a 00:03:14.192 SO libspdk_event_scheduler.so.4.0 00:03:14.192 SO libspdk_event_vhost_blk.so.3.0 00:03:14.192 LIB libspdk_event_sock.a 00:03:14.192 SO libspdk_event_fsdev.so.1.0 00:03:14.192 SO libspdk_event_vmd.so.6.0 
00:03:14.192 SO libspdk_event_sock.so.5.0 00:03:14.192 LIB libspdk_event_keyring.a 00:03:14.192 SYMLINK libspdk_event_vhost_blk.so 00:03:14.192 LIB libspdk_event_iobuf.a 00:03:14.192 SYMLINK libspdk_event_scheduler.so 00:03:14.192 SO libspdk_event_keyring.so.1.0 00:03:14.192 SYMLINK libspdk_event_fsdev.so 00:03:14.192 SYMLINK libspdk_event_vmd.so 00:03:14.192 SYMLINK libspdk_event_sock.so 00:03:14.192 SO libspdk_event_iobuf.so.3.0 00:03:14.449 SYMLINK libspdk_event_keyring.so 00:03:14.449 SYMLINK libspdk_event_iobuf.so 00:03:14.707 CC module/event/subsystems/accel/accel.o 00:03:14.707 LIB libspdk_event_accel.a 00:03:14.965 SO libspdk_event_accel.so.6.0 00:03:14.965 SYMLINK libspdk_event_accel.so 00:03:15.224 CC module/event/subsystems/bdev/bdev.o 00:03:15.482 LIB libspdk_event_bdev.a 00:03:15.482 SO libspdk_event_bdev.so.6.0 00:03:15.482 SYMLINK libspdk_event_bdev.so 00:03:15.739 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:15.739 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:15.739 CC module/event/subsystems/scsi/scsi.o 00:03:15.739 CC module/event/subsystems/ublk/ublk.o 00:03:15.739 CC module/event/subsystems/nbd/nbd.o 00:03:15.997 LIB libspdk_event_nbd.a 00:03:15.997 LIB libspdk_event_ublk.a 00:03:15.997 SO libspdk_event_nbd.so.6.0 00:03:15.997 SO libspdk_event_ublk.so.3.0 00:03:15.997 LIB libspdk_event_scsi.a 00:03:15.997 LIB libspdk_event_nvmf.a 00:03:15.997 SO libspdk_event_scsi.so.6.0 00:03:15.997 SO libspdk_event_nvmf.so.6.0 00:03:15.997 SYMLINK libspdk_event_nbd.so 00:03:15.997 SYMLINK libspdk_event_ublk.so 00:03:15.997 SYMLINK libspdk_event_scsi.so 00:03:15.997 SYMLINK libspdk_event_nvmf.so 00:03:16.255 CC module/event/subsystems/iscsi/iscsi.o 00:03:16.255 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:16.513 LIB libspdk_event_iscsi.a 00:03:16.513 LIB libspdk_event_vhost_scsi.a 00:03:16.513 SO libspdk_event_iscsi.so.6.0 00:03:16.513 SO libspdk_event_vhost_scsi.so.3.0 00:03:16.513 SYMLINK libspdk_event_iscsi.so 00:03:16.513 SYMLINK 
libspdk_event_vhost_scsi.so 00:03:16.771 SO libspdk.so.6.0 00:03:16.771 SYMLINK libspdk.so 00:03:17.029 CC app/trace_record/trace_record.o 00:03:17.029 CXX app/trace/trace.o 00:03:17.029 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:17.029 CC app/spdk_tgt/spdk_tgt.o 00:03:17.029 CC app/nvmf_tgt/nvmf_main.o 00:03:17.029 CC app/iscsi_tgt/iscsi_tgt.o 00:03:17.029 CC examples/ioat/perf/perf.o 00:03:17.029 CC examples/util/zipf/zipf.o 00:03:17.287 CC test/thread/poller_perf/poller_perf.o 00:03:17.287 CC test/dma/test_dma/test_dma.o 00:03:17.287 LINK interrupt_tgt 00:03:17.287 LINK iscsi_tgt 00:03:17.287 LINK spdk_tgt 00:03:17.287 LINK spdk_trace_record 00:03:17.544 LINK poller_perf 00:03:17.544 LINK nvmf_tgt 00:03:17.544 LINK zipf 00:03:17.544 LINK ioat_perf 00:03:17.544 LINK spdk_trace 00:03:17.803 TEST_HEADER include/spdk/accel.h 00:03:17.803 TEST_HEADER include/spdk/accel_module.h 00:03:17.803 TEST_HEADER include/spdk/assert.h 00:03:17.803 TEST_HEADER include/spdk/barrier.h 00:03:17.803 TEST_HEADER include/spdk/base64.h 00:03:17.803 TEST_HEADER include/spdk/bdev.h 00:03:17.803 TEST_HEADER include/spdk/bdev_zone.h 00:03:17.803 TEST_HEADER include/spdk/bdev_module.h 00:03:17.803 TEST_HEADER include/spdk/bit_array.h 00:03:17.803 CC examples/ioat/verify/verify.o 00:03:17.803 TEST_HEADER include/spdk/bit_pool.h 00:03:17.803 TEST_HEADER include/spdk/blob_bdev.h 00:03:17.803 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:17.803 TEST_HEADER include/spdk/blobfs.h 00:03:17.803 TEST_HEADER include/spdk/blob.h 00:03:17.803 TEST_HEADER include/spdk/conf.h 00:03:17.803 CC app/spdk_lspci/spdk_lspci.o 00:03:17.803 TEST_HEADER include/spdk/config.h 00:03:17.803 TEST_HEADER include/spdk/cpuset.h 00:03:17.803 TEST_HEADER include/spdk/crc16.h 00:03:17.803 TEST_HEADER include/spdk/crc32.h 00:03:17.803 TEST_HEADER include/spdk/crc64.h 00:03:17.803 TEST_HEADER include/spdk/dif.h 00:03:17.803 TEST_HEADER include/spdk/dma.h 00:03:17.803 TEST_HEADER include/spdk/endian.h 00:03:17.803 
TEST_HEADER include/spdk/env_dpdk.h 00:03:17.803 TEST_HEADER include/spdk/env.h 00:03:17.803 TEST_HEADER include/spdk/event.h 00:03:17.803 TEST_HEADER include/spdk/fd_group.h 00:03:17.803 TEST_HEADER include/spdk/fd.h 00:03:17.803 CC app/spdk_nvme_perf/perf.o 00:03:17.803 TEST_HEADER include/spdk/file.h 00:03:17.803 TEST_HEADER include/spdk/fsdev.h 00:03:17.803 TEST_HEADER include/spdk/fsdev_module.h 00:03:17.803 TEST_HEADER include/spdk/ftl.h 00:03:17.803 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:17.803 TEST_HEADER include/spdk/gpt_spec.h 00:03:17.803 TEST_HEADER include/spdk/hexlify.h 00:03:17.803 CC app/spdk_nvme_identify/identify.o 00:03:17.803 TEST_HEADER include/spdk/histogram_data.h 00:03:17.803 CC test/app/bdev_svc/bdev_svc.o 00:03:17.803 TEST_HEADER include/spdk/idxd.h 00:03:17.803 TEST_HEADER include/spdk/idxd_spec.h 00:03:17.803 TEST_HEADER include/spdk/init.h 00:03:17.803 TEST_HEADER include/spdk/ioat.h 00:03:17.803 TEST_HEADER include/spdk/ioat_spec.h 00:03:17.803 TEST_HEADER include/spdk/iscsi_spec.h 00:03:17.803 CC app/spdk_nvme_discover/discovery_aer.o 00:03:17.803 TEST_HEADER include/spdk/json.h 00:03:17.803 TEST_HEADER include/spdk/jsonrpc.h 00:03:17.803 TEST_HEADER include/spdk/keyring.h 00:03:17.803 TEST_HEADER include/spdk/keyring_module.h 00:03:17.803 TEST_HEADER include/spdk/likely.h 00:03:17.803 TEST_HEADER include/spdk/log.h 00:03:17.803 TEST_HEADER include/spdk/lvol.h 00:03:17.803 TEST_HEADER include/spdk/md5.h 00:03:17.803 TEST_HEADER include/spdk/memory.h 00:03:17.803 TEST_HEADER include/spdk/mmio.h 00:03:17.803 TEST_HEADER include/spdk/nbd.h 00:03:17.803 TEST_HEADER include/spdk/net.h 00:03:17.803 TEST_HEADER include/spdk/notify.h 00:03:17.803 TEST_HEADER include/spdk/nvme.h 00:03:17.803 TEST_HEADER include/spdk/nvme_intel.h 00:03:17.803 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:17.803 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:17.803 LINK test_dma 00:03:17.803 TEST_HEADER include/spdk/nvme_spec.h 00:03:17.803 TEST_HEADER 
include/spdk/nvme_zns.h 00:03:17.803 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:17.803 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:17.803 TEST_HEADER include/spdk/nvmf.h 00:03:17.803 TEST_HEADER include/spdk/nvmf_spec.h 00:03:17.803 TEST_HEADER include/spdk/nvmf_transport.h 00:03:17.803 TEST_HEADER include/spdk/opal.h 00:03:17.803 TEST_HEADER include/spdk/opal_spec.h 00:03:17.803 TEST_HEADER include/spdk/pci_ids.h 00:03:17.803 TEST_HEADER include/spdk/pipe.h 00:03:17.803 TEST_HEADER include/spdk/queue.h 00:03:17.803 TEST_HEADER include/spdk/reduce.h 00:03:18.061 TEST_HEADER include/spdk/rpc.h 00:03:18.061 TEST_HEADER include/spdk/scheduler.h 00:03:18.061 TEST_HEADER include/spdk/scsi.h 00:03:18.061 TEST_HEADER include/spdk/scsi_spec.h 00:03:18.061 TEST_HEADER include/spdk/sock.h 00:03:18.061 TEST_HEADER include/spdk/stdinc.h 00:03:18.061 TEST_HEADER include/spdk/string.h 00:03:18.061 LINK spdk_lspci 00:03:18.061 TEST_HEADER include/spdk/thread.h 00:03:18.061 TEST_HEADER include/spdk/trace.h 00:03:18.061 TEST_HEADER include/spdk/trace_parser.h 00:03:18.061 TEST_HEADER include/spdk/tree.h 00:03:18.061 TEST_HEADER include/spdk/ublk.h 00:03:18.061 TEST_HEADER include/spdk/util.h 00:03:18.061 TEST_HEADER include/spdk/uuid.h 00:03:18.061 TEST_HEADER include/spdk/version.h 00:03:18.061 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:18.061 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:18.061 TEST_HEADER include/spdk/vhost.h 00:03:18.061 TEST_HEADER include/spdk/vmd.h 00:03:18.061 TEST_HEADER include/spdk/xor.h 00:03:18.061 TEST_HEADER include/spdk/zipf.h 00:03:18.061 CXX test/cpp_headers/accel.o 00:03:18.061 CC examples/sock/hello_world/hello_sock.o 00:03:18.061 CC examples/thread/thread/thread_ex.o 00:03:18.061 LINK bdev_svc 00:03:18.061 LINK verify 00:03:18.061 LINK spdk_nvme_discover 00:03:18.061 CXX test/cpp_headers/accel_module.o 00:03:18.319 CXX test/cpp_headers/assert.o 00:03:18.319 CC app/spdk_top/spdk_top.o 00:03:18.319 CXX test/cpp_headers/barrier.o 
00:03:18.319 CXX test/cpp_headers/base64.o 00:03:18.319 LINK hello_sock 00:03:18.319 LINK thread 00:03:18.319 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:18.577 CXX test/cpp_headers/bdev.o 00:03:18.577 CXX test/cpp_headers/bdev_module.o 00:03:18.577 CC examples/vmd/lsvmd/lsvmd.o 00:03:18.577 CXX test/cpp_headers/bdev_zone.o 00:03:18.577 CC examples/vmd/led/led.o 00:03:18.577 CC examples/idxd/perf/perf.o 00:03:18.835 LINK lsvmd 00:03:18.835 CXX test/cpp_headers/bit_array.o 00:03:18.835 LINK led 00:03:18.835 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:18.835 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:18.835 LINK nvme_fuzz 00:03:19.092 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:19.092 LINK spdk_nvme_perf 00:03:19.092 CXX test/cpp_headers/bit_pool.o 00:03:19.092 CXX test/cpp_headers/blob_bdev.o 00:03:19.092 LINK spdk_nvme_identify 00:03:19.092 CXX test/cpp_headers/blobfs_bdev.o 00:03:19.092 LINK idxd_perf 00:03:19.092 CXX test/cpp_headers/blobfs.o 00:03:19.381 CXX test/cpp_headers/blob.o 00:03:19.381 CXX test/cpp_headers/conf.o 00:03:19.381 CC examples/nvme/hello_world/hello_world.o 00:03:19.381 CC examples/nvme/reconnect/reconnect.o 00:03:19.381 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:19.381 CC app/vhost/vhost.o 00:03:19.648 CXX test/cpp_headers/config.o 00:03:19.648 LINK vhost_fuzz 00:03:19.648 LINK spdk_top 00:03:19.648 CXX test/cpp_headers/cpuset.o 00:03:19.648 CC test/env/mem_callbacks/mem_callbacks.o 00:03:19.648 CC app/spdk_dd/spdk_dd.o 00:03:19.648 CXX test/cpp_headers/crc16.o 00:03:19.648 LINK vhost 00:03:19.648 CXX test/cpp_headers/crc32.o 00:03:19.648 LINK hello_world 00:03:19.905 CXX test/cpp_headers/crc64.o 00:03:19.905 LINK reconnect 00:03:19.905 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:20.163 CC examples/nvme/arbitration/arbitration.o 00:03:20.163 LINK spdk_dd 00:03:20.163 CC examples/accel/perf/accel_perf.o 00:03:20.163 CXX test/cpp_headers/dif.o 00:03:20.163 LINK nvme_manage 00:03:20.163 CC 
examples/blob/hello_world/hello_blob.o 00:03:20.163 LINK mem_callbacks 00:03:20.163 CC examples/blob/cli/blobcli.o 00:03:20.421 CXX test/cpp_headers/dma.o 00:03:20.421 LINK hello_fsdev 00:03:20.421 LINK arbitration 00:03:20.421 CC examples/nvme/hotplug/hotplug.o 00:03:20.421 LINK hello_blob 00:03:20.421 CC test/env/vtophys/vtophys.o 00:03:20.421 CC app/fio/nvme/fio_plugin.o 00:03:20.678 CXX test/cpp_headers/endian.o 00:03:20.679 LINK vtophys 00:03:20.679 CC app/fio/bdev/fio_plugin.o 00:03:20.679 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:20.679 CXX test/cpp_headers/env_dpdk.o 00:03:20.679 LINK hotplug 00:03:20.679 LINK accel_perf 00:03:20.936 CC test/app/histogram_perf/histogram_perf.o 00:03:20.936 CXX test/cpp_headers/env.o 00:03:20.936 LINK cmb_copy 00:03:20.936 LINK blobcli 00:03:20.936 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:20.936 LINK histogram_perf 00:03:21.195 CC test/env/memory/memory_ut.o 00:03:21.195 CC test/env/pci/pci_ut.o 00:03:21.195 LINK iscsi_fuzz 00:03:21.195 CXX test/cpp_headers/event.o 00:03:21.195 LINK env_dpdk_post_init 00:03:21.195 CC examples/nvme/abort/abort.o 00:03:21.195 LINK spdk_nvme 00:03:21.195 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:21.454 LINK spdk_bdev 00:03:21.454 CXX test/cpp_headers/fd_group.o 00:03:21.454 CC examples/bdev/hello_world/hello_bdev.o 00:03:21.454 CC test/app/jsoncat/jsoncat.o 00:03:21.454 LINK pmr_persistence 00:03:21.454 CC examples/bdev/bdevperf/bdevperf.o 00:03:21.711 CXX test/cpp_headers/fd.o 00:03:21.711 LINK pci_ut 00:03:21.711 CC test/event/event_perf/event_perf.o 00:03:21.711 LINK jsoncat 00:03:21.711 CC test/nvme/aer/aer.o 00:03:21.711 LINK abort 00:03:21.711 CXX test/cpp_headers/file.o 00:03:21.711 LINK hello_bdev 00:03:21.711 CC test/app/stub/stub.o 00:03:21.969 LINK event_perf 00:03:21.969 CXX test/cpp_headers/fsdev.o 00:03:21.969 CC test/rpc_client/rpc_client_test.o 00:03:21.969 CXX test/cpp_headers/fsdev_module.o 00:03:21.969 LINK stub 00:03:21.969 CC 
test/event/reactor/reactor.o 00:03:21.969 CXX test/cpp_headers/ftl.o 00:03:21.969 CC test/event/reactor_perf/reactor_perf.o 00:03:22.228 LINK aer 00:03:22.228 LINK reactor 00:03:22.228 LINK reactor_perf 00:03:22.228 CXX test/cpp_headers/fuse_dispatcher.o 00:03:22.228 LINK rpc_client_test 00:03:22.228 CXX test/cpp_headers/gpt_spec.o 00:03:22.228 CC test/event/app_repeat/app_repeat.o 00:03:22.486 CC test/event/scheduler/scheduler.o 00:03:22.486 CC test/nvme/reset/reset.o 00:03:22.486 CXX test/cpp_headers/hexlify.o 00:03:22.486 CXX test/cpp_headers/histogram_data.o 00:03:22.486 LINK app_repeat 00:03:22.486 CC test/nvme/sgl/sgl.o 00:03:22.486 LINK memory_ut 00:03:22.486 CC test/nvme/e2edp/nvme_dp.o 00:03:22.744 LINK bdevperf 00:03:22.744 CXX test/cpp_headers/idxd.o 00:03:22.744 LINK scheduler 00:03:22.744 CC test/accel/dif/dif.o 00:03:22.744 LINK reset 00:03:22.744 CC test/nvme/overhead/overhead.o 00:03:22.744 LINK sgl 00:03:22.744 CC test/blobfs/mkfs/mkfs.o 00:03:22.744 CXX test/cpp_headers/idxd_spec.o 00:03:23.002 CC test/nvme/err_injection/err_injection.o 00:03:23.002 LINK nvme_dp 00:03:23.002 CC test/nvme/startup/startup.o 00:03:23.002 CXX test/cpp_headers/init.o 00:03:23.002 LINK mkfs 00:03:23.002 CC examples/nvmf/nvmf/nvmf.o 00:03:23.260 LINK err_injection 00:03:23.260 CC test/nvme/reserve/reserve.o 00:03:23.260 CC test/lvol/esnap/esnap.o 00:03:23.260 LINK overhead 00:03:23.260 CC test/nvme/simple_copy/simple_copy.o 00:03:23.260 CXX test/cpp_headers/ioat.o 00:03:23.260 LINK startup 00:03:23.260 CXX test/cpp_headers/ioat_spec.o 00:03:23.517 LINK reserve 00:03:23.517 CC test/nvme/connect_stress/connect_stress.o 00:03:23.517 LINK nvmf 00:03:23.517 CC test/nvme/boot_partition/boot_partition.o 00:03:23.517 CXX test/cpp_headers/iscsi_spec.o 00:03:23.517 LINK simple_copy 00:03:23.517 CC test/nvme/compliance/nvme_compliance.o 00:03:23.517 CC test/nvme/fused_ordering/fused_ordering.o 00:03:23.517 LINK dif 00:03:23.775 LINK connect_stress 00:03:23.775 CC 
test/nvme/doorbell_aers/doorbell_aers.o 00:03:23.775 CXX test/cpp_headers/json.o 00:03:23.775 CXX test/cpp_headers/jsonrpc.o 00:03:23.775 LINK boot_partition 00:03:23.775 CC test/nvme/fdp/fdp.o 00:03:23.775 LINK fused_ordering 00:03:23.775 CXX test/cpp_headers/keyring.o 00:03:23.775 CXX test/cpp_headers/keyring_module.o 00:03:24.033 CXX test/cpp_headers/likely.o 00:03:24.033 LINK doorbell_aers 00:03:24.033 LINK nvme_compliance 00:03:24.033 CC test/nvme/cuse/cuse.o 00:03:24.033 CXX test/cpp_headers/log.o 00:03:24.033 CXX test/cpp_headers/lvol.o 00:03:24.033 CXX test/cpp_headers/md5.o 00:03:24.033 CXX test/cpp_headers/memory.o 00:03:24.033 CXX test/cpp_headers/mmio.o 00:03:24.033 CC test/bdev/bdevio/bdevio.o 00:03:24.291 CXX test/cpp_headers/nbd.o 00:03:24.291 CXX test/cpp_headers/net.o 00:03:24.291 CXX test/cpp_headers/notify.o 00:03:24.291 CXX test/cpp_headers/nvme.o 00:03:24.291 LINK fdp 00:03:24.291 CXX test/cpp_headers/nvme_intel.o 00:03:24.291 CXX test/cpp_headers/nvme_ocssd.o 00:03:24.291 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:24.291 CXX test/cpp_headers/nvme_spec.o 00:03:24.291 CXX test/cpp_headers/nvme_zns.o 00:03:24.550 CXX test/cpp_headers/nvmf_cmd.o 00:03:24.550 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:24.550 CXX test/cpp_headers/nvmf.o 00:03:24.550 CXX test/cpp_headers/nvmf_spec.o 00:03:24.550 CXX test/cpp_headers/nvmf_transport.o 00:03:24.550 CXX test/cpp_headers/opal.o 00:03:24.550 LINK bdevio 00:03:24.550 CXX test/cpp_headers/opal_spec.o 00:03:24.550 CXX test/cpp_headers/pci_ids.o 00:03:24.550 CXX test/cpp_headers/pipe.o 00:03:24.808 CXX test/cpp_headers/queue.o 00:03:24.808 CXX test/cpp_headers/reduce.o 00:03:24.808 CXX test/cpp_headers/rpc.o 00:03:24.808 CXX test/cpp_headers/scheduler.o 00:03:24.808 CXX test/cpp_headers/scsi.o 00:03:24.808 CXX test/cpp_headers/scsi_spec.o 00:03:24.808 CXX test/cpp_headers/sock.o 00:03:24.808 CXX test/cpp_headers/string.o 00:03:24.808 CXX test/cpp_headers/stdinc.o 00:03:24.808 CXX test/cpp_headers/thread.o 
00:03:25.066 CXX test/cpp_headers/trace.o 00:03:25.066 CXX test/cpp_headers/trace_parser.o 00:03:25.066 CXX test/cpp_headers/tree.o 00:03:25.066 CXX test/cpp_headers/ublk.o 00:03:25.066 CXX test/cpp_headers/util.o 00:03:25.066 CXX test/cpp_headers/uuid.o 00:03:25.066 CXX test/cpp_headers/version.o 00:03:25.066 CXX test/cpp_headers/vfio_user_pci.o 00:03:25.066 CXX test/cpp_headers/vfio_user_spec.o 00:03:25.066 CXX test/cpp_headers/vhost.o 00:03:25.066 CXX test/cpp_headers/vmd.o 00:03:25.066 CXX test/cpp_headers/xor.o 00:03:25.325 CXX test/cpp_headers/zipf.o 00:03:25.584 LINK cuse 00:03:31.009 LINK esnap 00:03:31.009 00:03:31.009 real 1m41.071s 00:03:31.009 user 9m16.774s 00:03:31.009 sys 1m44.454s 00:03:31.009 10:31:51 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:31.009 ************************************ 00:03:31.009 END TEST make 00:03:31.009 ************************************ 00:03:31.009 10:31:51 make -- common/autotest_common.sh@10 -- $ set +x 00:03:31.009 10:31:51 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:31.009 10:31:51 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:31.009 10:31:51 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:31.009 10:31:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:31.009 10:31:51 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:31.009 10:31:51 -- pm/common@44 -- $ pid=5296 00:03:31.009 10:31:51 -- pm/common@50 -- $ kill -TERM 5296 00:03:31.009 10:31:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:31.009 10:31:51 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:31.009 10:31:51 -- pm/common@44 -- $ pid=5297 00:03:31.009 10:31:51 -- pm/common@50 -- $ kill -TERM 5297 00:03:31.009 10:31:51 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:31.009 10:31:51 -- spdk/autorun.sh@27 -- $ sudo 
-E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:31.009 10:31:52 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:31.009 10:31:52 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:31.009 10:31:52 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:31.009 10:31:52 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:31.009 10:31:52 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:31.009 10:31:52 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:31.009 10:31:52 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:31.009 10:31:52 -- scripts/common.sh@336 -- # IFS=.-: 00:03:31.009 10:31:52 -- scripts/common.sh@336 -- # read -ra ver1 00:03:31.009 10:31:52 -- scripts/common.sh@337 -- # IFS=.-: 00:03:31.009 10:31:52 -- scripts/common.sh@337 -- # read -ra ver2 00:03:31.009 10:31:52 -- scripts/common.sh@338 -- # local 'op=<' 00:03:31.009 10:31:52 -- scripts/common.sh@340 -- # ver1_l=2 00:03:31.009 10:31:52 -- scripts/common.sh@341 -- # ver2_l=1 00:03:31.009 10:31:52 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:31.009 10:31:52 -- scripts/common.sh@344 -- # case "$op" in 00:03:31.009 10:31:52 -- scripts/common.sh@345 -- # : 1 00:03:31.009 10:31:52 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:31.009 10:31:52 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:31.009 10:31:52 -- scripts/common.sh@365 -- # decimal 1 00:03:31.009 10:31:52 -- scripts/common.sh@353 -- # local d=1 00:03:31.009 10:31:52 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:31.009 10:31:52 -- scripts/common.sh@355 -- # echo 1 00:03:31.009 10:31:52 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:31.009 10:31:52 -- scripts/common.sh@366 -- # decimal 2 00:03:31.009 10:31:52 -- scripts/common.sh@353 -- # local d=2 00:03:31.009 10:31:52 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:31.009 10:31:52 -- scripts/common.sh@355 -- # echo 2 00:03:31.009 10:31:52 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:31.009 10:31:52 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:31.009 10:31:52 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:31.009 10:31:52 -- scripts/common.sh@368 -- # return 0 00:03:31.009 10:31:52 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:31.009 10:31:52 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:31.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.009 --rc genhtml_branch_coverage=1 00:03:31.009 --rc genhtml_function_coverage=1 00:03:31.009 --rc genhtml_legend=1 00:03:31.009 --rc geninfo_all_blocks=1 00:03:31.009 --rc geninfo_unexecuted_blocks=1 00:03:31.009 00:03:31.009 ' 00:03:31.009 10:31:52 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:31.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.009 --rc genhtml_branch_coverage=1 00:03:31.009 --rc genhtml_function_coverage=1 00:03:31.009 --rc genhtml_legend=1 00:03:31.009 --rc geninfo_all_blocks=1 00:03:31.009 --rc geninfo_unexecuted_blocks=1 00:03:31.009 00:03:31.009 ' 00:03:31.009 10:31:52 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:31.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.009 --rc genhtml_branch_coverage=1 00:03:31.009 --rc 
genhtml_function_coverage=1 00:03:31.009 --rc genhtml_legend=1 00:03:31.009 --rc geninfo_all_blocks=1 00:03:31.009 --rc geninfo_unexecuted_blocks=1 00:03:31.009 00:03:31.009 ' 00:03:31.009 10:31:52 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:31.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.009 --rc genhtml_branch_coverage=1 00:03:31.009 --rc genhtml_function_coverage=1 00:03:31.009 --rc genhtml_legend=1 00:03:31.009 --rc geninfo_all_blocks=1 00:03:31.009 --rc geninfo_unexecuted_blocks=1 00:03:31.009 00:03:31.009 ' 00:03:31.009 10:31:52 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:31.009 10:31:52 -- nvmf/common.sh@7 -- # uname -s 00:03:31.009 10:31:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:31.009 10:31:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:31.009 10:31:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:31.009 10:31:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:31.009 10:31:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:31.009 10:31:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:31.009 10:31:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:31.009 10:31:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:31.009 10:31:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:31.009 10:31:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:31.268 10:31:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7725ba29-e2e6-419d-b1de-67bc0686c209 00:03:31.268 10:31:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=7725ba29-e2e6-419d-b1de-67bc0686c209 00:03:31.268 10:31:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:31.268 10:31:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:31.268 10:31:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:31.268 10:31:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:03:31.268 10:31:52 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:31.268 10:31:52 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:31.268 10:31:52 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:31.268 10:31:52 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:31.268 10:31:52 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:31.268 10:31:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.268 10:31:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.268 10:31:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.268 10:31:52 -- paths/export.sh@5 -- # export PATH 00:03:31.268 10:31:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.268 10:31:52 -- nvmf/common.sh@51 -- # : 0 00:03:31.268 10:31:52 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:31.268 10:31:52 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:31.268 10:31:52 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:03:31.268 10:31:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:31.268 10:31:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:31.268 10:31:52 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:31.268 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:31.268 10:31:52 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:31.268 10:31:52 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:31.268 10:31:52 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:31.268 10:31:52 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:31.268 10:31:52 -- spdk/autotest.sh@32 -- # uname -s 00:03:31.268 10:31:52 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:31.268 10:31:52 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:31.268 10:31:52 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:31.268 10:31:52 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:31.268 10:31:52 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:31.268 10:31:52 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:31.268 10:31:52 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:31.268 10:31:52 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:31.268 10:31:52 -- spdk/autotest.sh@48 -- # udevadm_pid=54382 00:03:31.268 10:31:52 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:31.268 10:31:52 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:31.268 10:31:52 -- pm/common@17 -- # local monitor 00:03:31.268 10:31:52 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:31.268 10:31:52 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:31.268 10:31:52 -- pm/common@21 -- # date +%s 00:03:31.268 10:31:52 -- pm/common@21 -- # date +%s 00:03:31.268 10:31:52 -- 
pm/common@25 -- # sleep 1 00:03:31.268 10:31:52 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731666712 00:03:31.268 10:31:52 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731666712 00:03:31.268 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731666712_collect-vmstat.pm.log 00:03:31.268 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731666712_collect-cpu-load.pm.log 00:03:32.202 10:31:53 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:32.202 10:31:53 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:32.202 10:31:53 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:32.202 10:31:53 -- common/autotest_common.sh@10 -- # set +x 00:03:32.202 10:31:53 -- spdk/autotest.sh@59 -- # create_test_list 00:03:32.202 10:31:53 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:32.202 10:31:53 -- common/autotest_common.sh@10 -- # set +x 00:03:32.202 10:31:53 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:32.202 10:31:53 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:32.202 10:31:53 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:32.202 10:31:53 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:32.202 10:31:53 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:32.202 10:31:53 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:32.202 10:31:53 -- common/autotest_common.sh@1457 -- # uname 00:03:32.202 10:31:53 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:32.202 10:31:53 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:32.202 10:31:53 -- common/autotest_common.sh@1477 -- # 
uname 00:03:32.202 10:31:53 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:32.202 10:31:53 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:32.202 10:31:53 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:32.461 lcov: LCOV version 1.15 00:03:32.461 10:31:53 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:50.550 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:50.550 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:05.435 10:32:26 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:05.435 10:32:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:05.435 10:32:26 -- common/autotest_common.sh@10 -- # set +x 00:04:05.435 10:32:26 -- spdk/autotest.sh@78 -- # rm -f 00:04:05.435 10:32:26 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:05.693 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:05.693 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:05.693 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:05.693 10:32:26 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:05.693 10:32:26 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:05.693 10:32:26 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:05.693 10:32:26 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:05.693 10:32:26 
-- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:05.693 10:32:26 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:05.693 10:32:26 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:05.693 10:32:26 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:05.693 10:32:26 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:05.693 10:32:26 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:05.693 10:32:26 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:05.693 10:32:26 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:05.693 10:32:26 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:05.693 10:32:26 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:05.693 10:32:26 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:05.693 10:32:26 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:04:05.693 10:32:26 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:04:05.693 10:32:26 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:05.693 10:32:26 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:05.693 10:32:26 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:05.693 10:32:26 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:04:05.693 10:32:26 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:04:05.693 10:32:26 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:05.693 10:32:26 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:05.693 10:32:26 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:05.694 10:32:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:05.694 10:32:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:05.694 10:32:26 -- spdk/autotest.sh@100 -- # block_in_use 
/dev/nvme0n1 00:04:05.694 10:32:26 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:05.694 10:32:26 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:05.952 No valid GPT data, bailing 00:04:05.952 10:32:26 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:05.952 10:32:26 -- scripts/common.sh@394 -- # pt= 00:04:05.952 10:32:26 -- scripts/common.sh@395 -- # return 1 00:04:05.952 10:32:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:05.952 1+0 records in 00:04:05.952 1+0 records out 00:04:05.952 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00467646 s, 224 MB/s 00:04:05.952 10:32:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:05.952 10:32:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:05.952 10:32:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:05.952 10:32:26 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:05.952 10:32:26 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:05.952 No valid GPT data, bailing 00:04:05.952 10:32:26 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:05.952 10:32:26 -- scripts/common.sh@394 -- # pt= 00:04:05.952 10:32:26 -- scripts/common.sh@395 -- # return 1 00:04:05.952 10:32:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:05.952 1+0 records in 00:04:05.952 1+0 records out 00:04:05.952 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00328335 s, 319 MB/s 00:04:05.952 10:32:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:05.952 10:32:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:05.952 10:32:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:05.952 10:32:26 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:05.952 10:32:26 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:05.952 
No valid GPT data, bailing 00:04:05.952 10:32:27 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:05.952 10:32:27 -- scripts/common.sh@394 -- # pt= 00:04:05.952 10:32:27 -- scripts/common.sh@395 -- # return 1 00:04:05.952 10:32:27 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:05.952 1+0 records in 00:04:05.952 1+0 records out 00:04:05.952 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00441515 s, 237 MB/s 00:04:05.952 10:32:27 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:05.952 10:32:27 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:05.952 10:32:27 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:05.952 10:32:27 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:05.952 10:32:27 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:06.211 No valid GPT data, bailing 00:04:06.211 10:32:27 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:06.211 10:32:27 -- scripts/common.sh@394 -- # pt= 00:04:06.211 10:32:27 -- scripts/common.sh@395 -- # return 1 00:04:06.211 10:32:27 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:06.211 1+0 records in 00:04:06.211 1+0 records out 00:04:06.211 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00448292 s, 234 MB/s 00:04:06.211 10:32:27 -- spdk/autotest.sh@105 -- # sync 00:04:06.211 10:32:27 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:06.211 10:32:27 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:06.211 10:32:27 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:08.126 10:32:29 -- spdk/autotest.sh@111 -- # uname -s 00:04:08.126 10:32:29 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:08.126 10:32:29 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:08.126 10:32:29 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:08.694 
0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:08.694 Hugepages 00:04:08.694 node hugesize free / total 00:04:08.694 node0 1048576kB 0 / 0 00:04:08.694 node0 2048kB 0 / 0 00:04:08.694 00:04:08.694 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:08.694 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:08.953 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:08.953 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:08.953 10:32:29 -- spdk/autotest.sh@117 -- # uname -s 00:04:08.953 10:32:29 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:08.953 10:32:29 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:08.953 10:32:29 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:09.519 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:09.778 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:09.778 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:09.778 10:32:30 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:10.714 10:32:31 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:10.714 10:32:31 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:10.714 10:32:31 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:10.714 10:32:31 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:10.714 10:32:31 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:10.714 10:32:31 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:10.714 10:32:31 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:10.714 10:32:31 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:10.714 10:32:31 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:10.714 10:32:31 -- 
common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:10.714 10:32:31 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:10.714 10:32:31 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:11.281 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:11.281 Waiting for block devices as requested 00:04:11.281 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:11.281 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:11.281 10:32:32 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:11.281 10:32:32 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:11.281 10:32:32 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:11.281 10:32:32 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:11.281 10:32:32 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:11.281 10:32:32 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:11.281 10:32:32 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:11.281 10:32:32 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:11.281 10:32:32 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:11.281 10:32:32 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:11.281 10:32:32 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:11.281 10:32:32 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:11.281 10:32:32 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:11.281 10:32:32 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:11.281 10:32:32 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:11.281 10:32:32 -- common/autotest_common.sh@1534 -- 
# [[ 8 -ne 0 ]] 00:04:11.539 10:32:32 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:11.539 10:32:32 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:11.539 10:32:32 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:11.539 10:32:32 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:11.539 10:32:32 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:11.539 10:32:32 -- common/autotest_common.sh@1543 -- # continue 00:04:11.539 10:32:32 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:11.539 10:32:32 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:11.539 10:32:32 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:11.539 10:32:32 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:11.540 10:32:32 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:11.540 10:32:32 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:11.540 10:32:32 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:11.540 10:32:32 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:11.540 10:32:32 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:11.540 10:32:32 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:11.540 10:32:32 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:11.540 10:32:32 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:11.540 10:32:32 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:11.540 10:32:32 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:11.540 10:32:32 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:11.540 10:32:32 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:11.540 10:32:32 -- common/autotest_common.sh@1540 -- # nvme id-ctrl 
/dev/nvme0 00:04:11.540 10:32:32 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:11.540 10:32:32 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:11.540 10:32:32 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:11.540 10:32:32 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:11.540 10:32:32 -- common/autotest_common.sh@1543 -- # continue 00:04:11.540 10:32:32 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:11.540 10:32:32 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:11.540 10:32:32 -- common/autotest_common.sh@10 -- # set +x 00:04:11.540 10:32:32 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:11.540 10:32:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:11.540 10:32:32 -- common/autotest_common.sh@10 -- # set +x 00:04:11.540 10:32:32 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:12.106 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:12.106 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:12.364 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:12.364 10:32:33 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:12.364 10:32:33 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:12.364 10:32:33 -- common/autotest_common.sh@10 -- # set +x 00:04:12.364 10:32:33 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:12.364 10:32:33 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:12.364 10:32:33 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:12.364 10:32:33 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:12.364 10:32:33 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:12.364 10:32:33 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:12.364 10:32:33 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:12.364 10:32:33 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:12.364 
10:32:33 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:12.364 10:32:33 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:12.364 10:32:33 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:12.364 10:32:33 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:12.364 10:32:33 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:12.364 10:32:33 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:12.364 10:32:33 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:12.364 10:32:33 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:12.364 10:32:33 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:12.364 10:32:33 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:12.364 10:32:33 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:12.364 10:32:33 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:12.364 10:32:33 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:12.364 10:32:33 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:12.364 10:32:33 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:12.364 10:32:33 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:12.364 10:32:33 -- common/autotest_common.sh@1572 -- # return 0 00:04:12.364 10:32:33 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:12.364 10:32:33 -- common/autotest_common.sh@1580 -- # return 0 00:04:12.364 10:32:33 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:12.364 10:32:33 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:12.364 10:32:33 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:12.364 10:32:33 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:12.364 10:32:33 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:12.364 10:32:33 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:04:12.364 10:32:33 -- common/autotest_common.sh@10 -- # set +x 00:04:12.364 10:32:33 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:12.364 10:32:33 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:12.364 10:32:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.364 10:32:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.364 10:32:33 -- common/autotest_common.sh@10 -- # set +x 00:04:12.364 ************************************ 00:04:12.364 START TEST env 00:04:12.364 ************************************ 00:04:12.364 10:32:33 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:12.622 * Looking for test storage... 00:04:12.622 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:12.622 10:32:33 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:12.622 10:32:33 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:12.622 10:32:33 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:12.622 10:32:33 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:12.622 10:32:33 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:12.622 10:32:33 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:12.622 10:32:33 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:12.622 10:32:33 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:12.622 10:32:33 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:12.622 10:32:33 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:12.622 10:32:33 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:12.622 10:32:33 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:12.622 10:32:33 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:12.622 10:32:33 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:12.622 10:32:33 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:12.622 10:32:33 env -- 
scripts/common.sh@344 -- # case "$op" in 00:04:12.622 10:32:33 env -- scripts/common.sh@345 -- # : 1 00:04:12.622 10:32:33 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:12.622 10:32:33 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:12.622 10:32:33 env -- scripts/common.sh@365 -- # decimal 1 00:04:12.622 10:32:33 env -- scripts/common.sh@353 -- # local d=1 00:04:12.622 10:32:33 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:12.622 10:32:33 env -- scripts/common.sh@355 -- # echo 1 00:04:12.622 10:32:33 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:12.622 10:32:33 env -- scripts/common.sh@366 -- # decimal 2 00:04:12.622 10:32:33 env -- scripts/common.sh@353 -- # local d=2 00:04:12.622 10:32:33 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:12.622 10:32:33 env -- scripts/common.sh@355 -- # echo 2 00:04:12.622 10:32:33 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:12.622 10:32:33 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:12.622 10:32:33 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:12.622 10:32:33 env -- scripts/common.sh@368 -- # return 0 00:04:12.622 10:32:33 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:12.622 10:32:33 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:12.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.622 --rc genhtml_branch_coverage=1 00:04:12.622 --rc genhtml_function_coverage=1 00:04:12.622 --rc genhtml_legend=1 00:04:12.622 --rc geninfo_all_blocks=1 00:04:12.622 --rc geninfo_unexecuted_blocks=1 00:04:12.622 00:04:12.622 ' 00:04:12.622 10:32:33 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:12.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.622 --rc genhtml_branch_coverage=1 00:04:12.622 --rc genhtml_function_coverage=1 00:04:12.622 --rc genhtml_legend=1 00:04:12.622 --rc 
geninfo_all_blocks=1 00:04:12.622 --rc geninfo_unexecuted_blocks=1 00:04:12.622 00:04:12.622 ' 00:04:12.622 10:32:33 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:12.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.622 --rc genhtml_branch_coverage=1 00:04:12.622 --rc genhtml_function_coverage=1 00:04:12.622 --rc genhtml_legend=1 00:04:12.622 --rc geninfo_all_blocks=1 00:04:12.622 --rc geninfo_unexecuted_blocks=1 00:04:12.622 00:04:12.622 ' 00:04:12.622 10:32:33 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:12.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.622 --rc genhtml_branch_coverage=1 00:04:12.622 --rc genhtml_function_coverage=1 00:04:12.622 --rc genhtml_legend=1 00:04:12.622 --rc geninfo_all_blocks=1 00:04:12.622 --rc geninfo_unexecuted_blocks=1 00:04:12.622 00:04:12.622 ' 00:04:12.622 10:32:33 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:12.622 10:32:33 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.622 10:32:33 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.622 10:32:33 env -- common/autotest_common.sh@10 -- # set +x 00:04:12.622 ************************************ 00:04:12.622 START TEST env_memory 00:04:12.622 ************************************ 00:04:12.622 10:32:33 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:12.622 00:04:12.622 00:04:12.622 CUnit - A unit testing framework for C - Version 2.1-3 00:04:12.622 http://cunit.sourceforge.net/ 00:04:12.622 00:04:12.622 00:04:12.622 Suite: memory 00:04:12.880 Test: alloc and free memory map ...[2024-11-15 10:32:33.818054] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:12.880 passed 00:04:12.880 Test: mem map translation ...[2024-11-15 10:32:33.879683] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:12.880 [2024-11-15 10:32:33.879965] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:12.880 [2024-11-15 10:32:33.880077] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:12.880 [2024-11-15 10:32:33.880108] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:12.880 passed 00:04:12.880 Test: mem map registration ...[2024-11-15 10:32:33.970594] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:12.880 [2024-11-15 10:32:33.970734] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:12.880 passed 00:04:13.140 Test: mem map adjacent registrations ...passed 00:04:13.140 00:04:13.140 Run Summary: Type Total Ran Passed Failed Inactive 00:04:13.140 suites 1 1 n/a 0 0 00:04:13.140 tests 4 4 4 0 0 00:04:13.140 asserts 152 152 152 0 n/a 00:04:13.140 00:04:13.140 Elapsed time = 0.308 seconds 00:04:13.140 00:04:13.140 real 0m0.358s 00:04:13.140 user 0m0.313s 00:04:13.140 sys 0m0.032s 00:04:13.140 ************************************ 00:04:13.140 END TEST env_memory 00:04:13.140 ************************************ 00:04:13.140 10:32:34 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:13.140 10:32:34 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:13.140 10:32:34 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:13.140 
10:32:34 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:13.140 10:32:34 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:13.140 10:32:34 env -- common/autotest_common.sh@10 -- # set +x 00:04:13.140 ************************************ 00:04:13.140 START TEST env_vtophys 00:04:13.140 ************************************ 00:04:13.140 10:32:34 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:13.140 EAL: lib.eal log level changed from notice to debug 00:04:13.140 EAL: Detected lcore 0 as core 0 on socket 0 00:04:13.140 EAL: Detected lcore 1 as core 0 on socket 0 00:04:13.140 EAL: Detected lcore 2 as core 0 on socket 0 00:04:13.140 EAL: Detected lcore 3 as core 0 on socket 0 00:04:13.140 EAL: Detected lcore 4 as core 0 on socket 0 00:04:13.140 EAL: Detected lcore 5 as core 0 on socket 0 00:04:13.140 EAL: Detected lcore 6 as core 0 on socket 0 00:04:13.140 EAL: Detected lcore 7 as core 0 on socket 0 00:04:13.140 EAL: Detected lcore 8 as core 0 on socket 0 00:04:13.140 EAL: Detected lcore 9 as core 0 on socket 0 00:04:13.140 EAL: Maximum logical cores by configuration: 128 00:04:13.140 EAL: Detected CPU lcores: 10 00:04:13.140 EAL: Detected NUMA nodes: 1 00:04:13.140 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:13.140 EAL: Detected shared linkage of DPDK 00:04:13.140 EAL: No shared files mode enabled, IPC will be disabled 00:04:13.140 EAL: Selected IOVA mode 'PA' 00:04:13.140 EAL: Probing VFIO support... 00:04:13.140 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:13.140 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:13.140 EAL: Ask a virtual area of 0x2e000 bytes 00:04:13.140 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:13.140 EAL: Setting up physically contiguous memory... 
00:04:13.140 EAL: Setting maximum number of open files to 524288 00:04:13.140 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:13.140 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:13.140 EAL: Ask a virtual area of 0x61000 bytes 00:04:13.140 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:13.140 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:13.140 EAL: Ask a virtual area of 0x400000000 bytes 00:04:13.140 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:13.140 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:13.140 EAL: Ask a virtual area of 0x61000 bytes 00:04:13.140 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:13.140 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:13.140 EAL: Ask a virtual area of 0x400000000 bytes 00:04:13.140 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:13.140 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:13.140 EAL: Ask a virtual area of 0x61000 bytes 00:04:13.140 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:13.140 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:13.140 EAL: Ask a virtual area of 0x400000000 bytes 00:04:13.140 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:13.141 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:13.141 EAL: Ask a virtual area of 0x61000 bytes 00:04:13.141 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:13.141 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:13.141 EAL: Ask a virtual area of 0x400000000 bytes 00:04:13.141 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:13.141 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:13.141 EAL: Hugepages will be freed exactly as allocated. 
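A side note on the EAL lines above: the reserved VA sizes are internally consistent. Each memseg list is created with n_segs:8192 and hugepage_sz:2097152, and 8192 × 2 MiB is exactly the 0x400000000 bytes EAL reports for each reservation. A quick arithmetic check (a reader-side sketch, not part of the test trace):

```shell
# Sanity-check the memseg math from the EAL trace:
# 8192 segments * 2 MiB hugepages per list.
n_segs=8192
hugepage_sz=2097152                                # 2 MiB, as logged
list_bytes=$((n_segs * hugepage_sz))
printf 'per-list VA: 0x%x bytes\n' "$list_bytes"   # 0x400000000, matching the log
printf 'four lists:  %d GiB\n' $((4 * list_bytes / 1073741824))
```

With four lists, EAL reserves 64 GiB of virtual address space up front, which is why the "Virtual area found" addresses above advance in 0x400000000 strides.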
00:04:13.141 EAL: No shared files mode enabled, IPC is disabled 00:04:13.141 EAL: No shared files mode enabled, IPC is disabled 00:04:13.400 EAL: TSC frequency is ~2200000 KHz 00:04:13.400 EAL: Main lcore 0 is ready (tid=7f3c25819a40;cpuset=[0]) 00:04:13.400 EAL: Trying to obtain current memory policy. 00:04:13.400 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.400 EAL: Restoring previous memory policy: 0 00:04:13.400 EAL: request: mp_malloc_sync 00:04:13.401 EAL: No shared files mode enabled, IPC is disabled 00:04:13.401 EAL: Heap on socket 0 was expanded by 2MB 00:04:13.401 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:13.401 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:13.401 EAL: Mem event callback 'spdk:(nil)' registered 00:04:13.401 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:13.401 00:04:13.401 00:04:13.401 CUnit - A unit testing framework for C - Version 2.1-3 00:04:13.401 http://cunit.sourceforge.net/ 00:04:13.401 00:04:13.401 00:04:13.401 Suite: components_suite 00:04:13.967 Test: vtophys_malloc_test ...passed 00:04:13.967 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:13.967 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.967 EAL: Restoring previous memory policy: 4 00:04:13.967 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.967 EAL: request: mp_malloc_sync 00:04:13.967 EAL: No shared files mode enabled, IPC is disabled 00:04:13.967 EAL: Heap on socket 0 was expanded by 4MB 00:04:13.967 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.967 EAL: request: mp_malloc_sync 00:04:13.967 EAL: No shared files mode enabled, IPC is disabled 00:04:13.967 EAL: Heap on socket 0 was shrunk by 4MB 00:04:13.967 EAL: Trying to obtain current memory policy. 
00:04:13.967 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.967 EAL: Restoring previous memory policy: 4 00:04:13.967 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.967 EAL: request: mp_malloc_sync 00:04:13.967 EAL: No shared files mode enabled, IPC is disabled 00:04:13.967 EAL: Heap on socket 0 was expanded by 6MB 00:04:13.967 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.967 EAL: request: mp_malloc_sync 00:04:13.967 EAL: No shared files mode enabled, IPC is disabled 00:04:13.967 EAL: Heap on socket 0 was shrunk by 6MB 00:04:13.967 EAL: Trying to obtain current memory policy. 00:04:13.967 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.967 EAL: Restoring previous memory policy: 4 00:04:13.967 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.967 EAL: request: mp_malloc_sync 00:04:13.967 EAL: No shared files mode enabled, IPC is disabled 00:04:13.967 EAL: Heap on socket 0 was expanded by 10MB 00:04:13.967 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.967 EAL: request: mp_malloc_sync 00:04:13.967 EAL: No shared files mode enabled, IPC is disabled 00:04:13.967 EAL: Heap on socket 0 was shrunk by 10MB 00:04:13.967 EAL: Trying to obtain current memory policy. 00:04:13.967 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.967 EAL: Restoring previous memory policy: 4 00:04:13.967 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.967 EAL: request: mp_malloc_sync 00:04:13.967 EAL: No shared files mode enabled, IPC is disabled 00:04:13.967 EAL: Heap on socket 0 was expanded by 18MB 00:04:13.967 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.967 EAL: request: mp_malloc_sync 00:04:13.967 EAL: No shared files mode enabled, IPC is disabled 00:04:13.967 EAL: Heap on socket 0 was shrunk by 18MB 00:04:13.967 EAL: Trying to obtain current memory policy. 
00:04:13.967 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.967 EAL: Restoring previous memory policy: 4 00:04:13.967 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.967 EAL: request: mp_malloc_sync 00:04:13.967 EAL: No shared files mode enabled, IPC is disabled 00:04:13.967 EAL: Heap on socket 0 was expanded by 34MB 00:04:13.967 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.967 EAL: request: mp_malloc_sync 00:04:13.967 EAL: No shared files mode enabled, IPC is disabled 00:04:13.967 EAL: Heap on socket 0 was shrunk by 34MB 00:04:13.967 EAL: Trying to obtain current memory policy. 00:04:13.967 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.967 EAL: Restoring previous memory policy: 4 00:04:13.967 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.967 EAL: request: mp_malloc_sync 00:04:13.967 EAL: No shared files mode enabled, IPC is disabled 00:04:13.967 EAL: Heap on socket 0 was expanded by 66MB 00:04:14.226 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.226 EAL: request: mp_malloc_sync 00:04:14.226 EAL: No shared files mode enabled, IPC is disabled 00:04:14.226 EAL: Heap on socket 0 was shrunk by 66MB 00:04:14.226 EAL: Trying to obtain current memory policy. 00:04:14.226 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.226 EAL: Restoring previous memory policy: 4 00:04:14.226 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.226 EAL: request: mp_malloc_sync 00:04:14.226 EAL: No shared files mode enabled, IPC is disabled 00:04:14.226 EAL: Heap on socket 0 was expanded by 130MB 00:04:14.485 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.485 EAL: request: mp_malloc_sync 00:04:14.485 EAL: No shared files mode enabled, IPC is disabled 00:04:14.485 EAL: Heap on socket 0 was shrunk by 130MB 00:04:14.744 EAL: Trying to obtain current memory policy. 
00:04:14.744 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.744 EAL: Restoring previous memory policy: 4 00:04:14.744 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.744 EAL: request: mp_malloc_sync 00:04:14.744 EAL: No shared files mode enabled, IPC is disabled 00:04:14.744 EAL: Heap on socket 0 was expanded by 258MB 00:04:15.311 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.311 EAL: request: mp_malloc_sync 00:04:15.311 EAL: No shared files mode enabled, IPC is disabled 00:04:15.311 EAL: Heap on socket 0 was shrunk by 258MB 00:04:15.582 EAL: Trying to obtain current memory policy. 00:04:15.582 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.840 EAL: Restoring previous memory policy: 4 00:04:15.840 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.840 EAL: request: mp_malloc_sync 00:04:15.840 EAL: No shared files mode enabled, IPC is disabled 00:04:15.840 EAL: Heap on socket 0 was expanded by 514MB 00:04:16.776 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.776 EAL: request: mp_malloc_sync 00:04:16.776 EAL: No shared files mode enabled, IPC is disabled 00:04:16.776 EAL: Heap on socket 0 was shrunk by 514MB 00:04:17.340 EAL: Trying to obtain current memory policy. 
00:04:17.340 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:17.598 EAL: Restoring previous memory policy: 4 00:04:17.598 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.598 EAL: request: mp_malloc_sync 00:04:17.598 EAL: No shared files mode enabled, IPC is disabled 00:04:17.598 EAL: Heap on socket 0 was expanded by 1026MB 00:04:19.497 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.497 EAL: request: mp_malloc_sync 00:04:19.497 EAL: No shared files mode enabled, IPC is disabled 00:04:19.497 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:20.893 passed 00:04:20.894 00:04:20.894 Run Summary: Type Total Ran Passed Failed Inactive 00:04:20.894 suites 1 1 n/a 0 0 00:04:20.894 tests 2 2 2 0 0 00:04:20.894 asserts 5593 5593 5593 0 n/a 00:04:20.894 00:04:20.894 Elapsed time = 7.571 seconds 00:04:20.894 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.894 EAL: request: mp_malloc_sync 00:04:20.894 EAL: No shared files mode enabled, IPC is disabled 00:04:20.894 EAL: Heap on socket 0 was shrunk by 2MB 00:04:20.894 EAL: No shared files mode enabled, IPC is disabled 00:04:20.894 EAL: No shared files mode enabled, IPC is disabled 00:04:20.894 EAL: No shared files mode enabled, IPC is disabled 00:04:21.152 00:04:21.152 real 0m7.919s 00:04:21.152 user 0m6.683s 00:04:21.152 sys 0m1.065s 00:04:21.152 10:32:42 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.152 10:32:42 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:21.152 ************************************ 00:04:21.152 END TEST env_vtophys 00:04:21.152 ************************************ 00:04:21.152 10:32:42 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:21.152 10:32:42 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.152 10:32:42 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.152 10:32:42 env -- common/autotest_common.sh@10 -- # set +x 00:04:21.152 
************************************ 00:04:21.152 START TEST env_pci 00:04:21.152 ************************************ 00:04:21.152 10:32:42 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:21.152 00:04:21.152 00:04:21.152 CUnit - A unit testing framework for C - Version 2.1-3 00:04:21.152 http://cunit.sourceforge.net/ 00:04:21.152 00:04:21.152 00:04:21.152 Suite: pci 00:04:21.152 Test: pci_hook ...[2024-11-15 10:32:42.164447] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56703 has claimed it 00:04:21.152 EAL: Cannot find device (10000:00:01.0) 00:04:21.152 EAL: Failed to attach device on primary process 00:04:21.152 passed 00:04:21.152 00:04:21.152 Run Summary: Type Total Ran Passed Failed Inactive 00:04:21.152 suites 1 1 n/a 0 0 00:04:21.152 tests 1 1 1 0 0 00:04:21.152 asserts 25 25 25 0 n/a 00:04:21.152 00:04:21.152 Elapsed time = 0.009 seconds 00:04:21.152 00:04:21.152 real 0m0.083s 00:04:21.152 user 0m0.036s 00:04:21.152 sys 0m0.046s 00:04:21.152 10:32:42 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.152 ************************************ 00:04:21.152 10:32:42 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:21.153 END TEST env_pci 00:04:21.153 ************************************ 00:04:21.153 10:32:42 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:21.153 10:32:42 env -- env/env.sh@15 -- # uname 00:04:21.153 10:32:42 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:21.153 10:32:42 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:21.153 10:32:42 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:21.153 10:32:42 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:21.153 10:32:42 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.153 10:32:42 env -- common/autotest_common.sh@10 -- # set +x 00:04:21.153 ************************************ 00:04:21.153 START TEST env_dpdk_post_init 00:04:21.153 ************************************ 00:04:21.153 10:32:42 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:21.411 EAL: Detected CPU lcores: 10 00:04:21.411 EAL: Detected NUMA nodes: 1 00:04:21.411 EAL: Detected shared linkage of DPDK 00:04:21.411 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:21.411 EAL: Selected IOVA mode 'PA' 00:04:21.411 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:21.411 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:21.411 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:21.411 Starting DPDK initialization... 00:04:21.411 Starting SPDK post initialization... 00:04:21.411 SPDK NVMe probe 00:04:21.411 Attaching to 0000:00:10.0 00:04:21.411 Attaching to 0000:00:11.0 00:04:21.411 Attached to 0000:00:10.0 00:04:21.411 Attached to 0000:00:11.0 00:04:21.411 Cleaning up... 
00:04:21.411 00:04:21.411 real 0m0.306s 00:04:21.411 user 0m0.111s 00:04:21.411 sys 0m0.094s 00:04:21.411 10:32:42 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.411 ************************************ 00:04:21.411 END TEST env_dpdk_post_init 00:04:21.411 ************************************ 00:04:21.411 10:32:42 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:21.670 10:32:42 env -- env/env.sh@26 -- # uname 00:04:21.670 10:32:42 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:21.670 10:32:42 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:21.670 10:32:42 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.670 10:32:42 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.670 10:32:42 env -- common/autotest_common.sh@10 -- # set +x 00:04:21.670 ************************************ 00:04:21.670 START TEST env_mem_callbacks 00:04:21.670 ************************************ 00:04:21.670 10:32:42 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:21.670 EAL: Detected CPU lcores: 10 00:04:21.670 EAL: Detected NUMA nodes: 1 00:04:21.670 EAL: Detected shared linkage of DPDK 00:04:21.670 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:21.670 EAL: Selected IOVA mode 'PA' 00:04:21.670 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:21.670 00:04:21.670 00:04:21.670 CUnit - A unit testing framework for C - Version 2.1-3 00:04:21.670 http://cunit.sourceforge.net/ 00:04:21.670 00:04:21.670 00:04:21.670 Suite: memory 00:04:21.670 Test: test ... 
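The memory suite that begins above drives DPDK mem-event callbacks: each allocation registers a region (the `register 0x... <len>` records) and each free unregisters it. A toy accounting of that register/unregister protocol — purely illustrative, not the SPDK callback API; the addresses and lengths are taken from the trace:

```python
class RegionTracker:
    """Toy model of mem-event callback bookkeeping: a region must be
    registered before use and unregistered exactly once."""
    def __init__(self):
        self.regions = {}  # base address -> length

    def register(self, addr, length):
        assert addr not in self.regions, "double register"
        self.regions[addr] = length

    def unregister(self, addr, length):
        assert self.regions.get(addr) == length, "unknown region or wrong length"
        del self.regions[addr]

# Replay the kind of events the trace records:
t = RegionTracker()
t.register(0x200000200000, 2097152)
t.register(0x200000400000, 4194304)
t.unregister(0x200000400000, 4194304)
t.unregister(0x200000200000, 2097152)
print(len(t.regions))  # → 0, every region was unregistered
```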
00:04:21.670 register 0x200000200000 2097152 00:04:21.670 malloc 3145728 00:04:21.670 register 0x200000400000 4194304 00:04:21.670 buf 0x2000004fffc0 len 3145728 PASSED 00:04:21.670 malloc 64 00:04:21.670 buf 0x2000004ffec0 len 64 PASSED 00:04:21.670 malloc 4194304 00:04:21.670 register 0x200000800000 6291456 00:04:21.929 buf 0x2000009fffc0 len 4194304 PASSED 00:04:21.929 free 0x2000004fffc0 3145728 00:04:21.929 free 0x2000004ffec0 64 00:04:21.929 unregister 0x200000400000 4194304 PASSED 00:04:21.929 free 0x2000009fffc0 4194304 00:04:21.929 unregister 0x200000800000 6291456 PASSED 00:04:21.929 malloc 8388608 00:04:21.929 register 0x200000400000 10485760 00:04:21.929 buf 0x2000005fffc0 len 8388608 PASSED 00:04:21.929 free 0x2000005fffc0 8388608 00:04:21.929 unregister 0x200000400000 10485760 PASSED 00:04:21.929 passed 00:04:21.929 00:04:21.929 Run Summary: Type Total Ran Passed Failed Inactive 00:04:21.929 suites 1 1 n/a 0 0 00:04:21.929 tests 1 1 1 0 0 00:04:21.929 asserts 15 15 15 0 n/a 00:04:21.929 00:04:21.929 Elapsed time = 0.079 seconds 00:04:21.929 00:04:21.929 real 0m0.288s 00:04:21.929 user 0m0.113s 00:04:21.929 sys 0m0.073s 00:04:21.929 10:32:42 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.929 ************************************ 00:04:21.929 END TEST env_mem_callbacks 00:04:21.929 ************************************ 00:04:21.929 10:32:42 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:21.929 00:04:21.929 real 0m9.450s 00:04:21.929 user 0m7.482s 00:04:21.929 sys 0m1.567s 00:04:21.929 10:32:42 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.929 ************************************ 00:04:21.929 END TEST env 00:04:21.929 ************************************ 00:04:21.929 10:32:42 env -- common/autotest_common.sh@10 -- # set +x 00:04:21.929 10:32:42 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:21.929 10:32:42 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.929 10:32:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.929 10:32:42 -- common/autotest_common.sh@10 -- # set +x 00:04:21.929 ************************************ 00:04:21.929 START TEST rpc 00:04:21.930 ************************************ 00:04:21.930 10:32:42 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:21.930 * Looking for test storage... 00:04:21.930 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:21.930 10:32:43 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:21.930 10:32:43 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:21.930 10:32:43 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:22.189 10:32:43 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:22.189 10:32:43 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:22.189 10:32:43 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:22.189 10:32:43 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:22.189 10:32:43 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:22.189 10:32:43 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:22.189 10:32:43 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:22.189 10:32:43 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:22.189 10:32:43 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:22.189 10:32:43 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:22.189 10:32:43 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:22.189 10:32:43 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:22.189 10:32:43 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:22.189 10:32:43 rpc -- scripts/common.sh@345 -- # : 1 00:04:22.189 10:32:43 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:22.189 10:32:43 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:22.189 10:32:43 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:22.189 10:32:43 rpc -- scripts/common.sh@353 -- # local d=1 00:04:22.189 10:32:43 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:22.189 10:32:43 rpc -- scripts/common.sh@355 -- # echo 1 00:04:22.189 10:32:43 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:22.189 10:32:43 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:22.189 10:32:43 rpc -- scripts/common.sh@353 -- # local d=2 00:04:22.189 10:32:43 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:22.189 10:32:43 rpc -- scripts/common.sh@355 -- # echo 2 00:04:22.189 10:32:43 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:22.189 10:32:43 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:22.189 10:32:43 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:22.189 10:32:43 rpc -- scripts/common.sh@368 -- # return 0 00:04:22.189 10:32:43 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:22.189 10:32:43 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:22.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.189 --rc genhtml_branch_coverage=1 00:04:22.189 --rc genhtml_function_coverage=1 00:04:22.189 --rc genhtml_legend=1 00:04:22.189 --rc geninfo_all_blocks=1 00:04:22.189 --rc geninfo_unexecuted_blocks=1 00:04:22.189 00:04:22.189 ' 00:04:22.189 10:32:43 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:22.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.189 --rc genhtml_branch_coverage=1 00:04:22.189 --rc genhtml_function_coverage=1 00:04:22.189 --rc genhtml_legend=1 00:04:22.189 --rc geninfo_all_blocks=1 00:04:22.189 --rc geninfo_unexecuted_blocks=1 00:04:22.189 00:04:22.189 ' 00:04:22.189 10:32:43 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:22.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:22.189 --rc genhtml_branch_coverage=1 00:04:22.189 --rc genhtml_function_coverage=1 00:04:22.189 --rc genhtml_legend=1 00:04:22.189 --rc geninfo_all_blocks=1 00:04:22.189 --rc geninfo_unexecuted_blocks=1 00:04:22.189 00:04:22.189 ' 00:04:22.189 10:32:43 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:22.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.189 --rc genhtml_branch_coverage=1 00:04:22.189 --rc genhtml_function_coverage=1 00:04:22.189 --rc genhtml_legend=1 00:04:22.189 --rc geninfo_all_blocks=1 00:04:22.189 --rc geninfo_unexecuted_blocks=1 00:04:22.189 00:04:22.189 ' 00:04:22.189 10:32:43 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56830 00:04:22.189 10:32:43 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:22.189 10:32:43 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56830 00:04:22.189 10:32:43 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:22.189 10:32:43 rpc -- common/autotest_common.sh@835 -- # '[' -z 56830 ']' 00:04:22.189 10:32:43 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:22.189 10:32:43 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:22.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:22.189 10:32:43 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:22.189 10:32:43 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:22.189 10:32:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.189 [2024-11-15 10:32:43.334926] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:04:22.189 [2024-11-15 10:32:43.335154] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56830 ] 00:04:22.452 [2024-11-15 10:32:43.528333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.710 [2024-11-15 10:32:43.686096] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:22.710 [2024-11-15 10:32:43.686193] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56830' to capture a snapshot of events at runtime. 00:04:22.710 [2024-11-15 10:32:43.686215] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:22.710 [2024-11-15 10:32:43.686234] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:22.710 [2024-11-15 10:32:43.686249] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56830 for offline analysis/debug. 
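The spdk_tgt startup notices above name the shared-memory trace file, which embeds the target pid (`/dev/shm/spdk_tgt_trace.pid56830`). A small illustrative helper for pulling that pid out of such a notice — the log line format is taken from the trace above; the helper itself is an assumption, not part of SPDK:

```python
import re

def trace_file_pid(notice):
    """Extract the target pid from an app_setup_trace notice such as
    'Or copy /dev/shm/spdk_tgt_trace.pid56830 for offline analysis/debug.'"""
    m = re.search(r"/dev/shm/\S+\.pid(\d+)", notice)
    return int(m.group(1)) if m else None

notice = "Or copy /dev/shm/spdk_tgt_trace.pid56830 for offline analysis/debug."
print(trace_file_pid(notice))  # → 56830
```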
00:04:22.710 [2024-11-15 10:32:43.687891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.673 10:32:44 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:23.673 10:32:44 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:23.673 10:32:44 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:23.673 10:32:44 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:23.673 10:32:44 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:23.673 10:32:44 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:23.673 10:32:44 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.673 10:32:44 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.673 10:32:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.673 ************************************ 00:04:23.673 START TEST rpc_integrity 00:04:23.673 ************************************ 00:04:23.673 10:32:44 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:23.673 10:32:44 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:23.673 10:32:44 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.673 10:32:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.673 10:32:44 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.673 10:32:44 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:23.673 10:32:44 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:23.673 10:32:44 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:23.673 10:32:44 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:23.673 10:32:44 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.673 10:32:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.673 10:32:44 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.673 10:32:44 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:23.673 10:32:44 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:23.673 10:32:44 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.673 10:32:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.673 10:32:44 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.673 10:32:44 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:23.673 { 00:04:23.673 "name": "Malloc0", 00:04:23.673 "aliases": [ 00:04:23.673 "288b5e39-d3a2-4bef-8c53-ed6b22eb93f1" 00:04:23.673 ], 00:04:23.673 "product_name": "Malloc disk", 00:04:23.673 "block_size": 512, 00:04:23.673 "num_blocks": 16384, 00:04:23.673 "uuid": "288b5e39-d3a2-4bef-8c53-ed6b22eb93f1", 00:04:23.673 "assigned_rate_limits": { 00:04:23.673 "rw_ios_per_sec": 0, 00:04:23.673 "rw_mbytes_per_sec": 0, 00:04:23.673 "r_mbytes_per_sec": 0, 00:04:23.673 "w_mbytes_per_sec": 0 00:04:23.673 }, 00:04:23.673 "claimed": false, 00:04:23.673 "zoned": false, 00:04:23.673 "supported_io_types": { 00:04:23.673 "read": true, 00:04:23.673 "write": true, 00:04:23.673 "unmap": true, 00:04:23.673 "flush": true, 00:04:23.673 "reset": true, 00:04:23.673 "nvme_admin": false, 00:04:23.673 "nvme_io": false, 00:04:23.673 "nvme_io_md": false, 00:04:23.673 "write_zeroes": true, 00:04:23.673 "zcopy": true, 00:04:23.673 "get_zone_info": false, 00:04:23.673 "zone_management": false, 00:04:23.673 "zone_append": false, 00:04:23.673 "compare": false, 00:04:23.673 "compare_and_write": false, 00:04:23.673 "abort": true, 00:04:23.673 "seek_hole": false, 
00:04:23.673 "seek_data": false, 00:04:23.673 "copy": true, 00:04:23.673 "nvme_iov_md": false 00:04:23.673 }, 00:04:23.674 "memory_domains": [ 00:04:23.674 { 00:04:23.674 "dma_device_id": "system", 00:04:23.674 "dma_device_type": 1 00:04:23.674 }, 00:04:23.674 { 00:04:23.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.674 "dma_device_type": 2 00:04:23.674 } 00:04:23.674 ], 00:04:23.674 "driver_specific": {} 00:04:23.674 } 00:04:23.674 ]' 00:04:23.674 10:32:44 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:23.674 10:32:44 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:23.674 10:32:44 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:23.674 10:32:44 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.674 10:32:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.674 [2024-11-15 10:32:44.750641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:23.674 [2024-11-15 10:32:44.750743] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:23.674 [2024-11-15 10:32:44.750780] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:04:23.674 [2024-11-15 10:32:44.750804] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:23.674 [2024-11-15 10:32:44.753917] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:23.674 [2024-11-15 10:32:44.754106] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:23.674 Passthru0 00:04:23.674 10:32:44 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.674 10:32:44 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:23.674 10:32:44 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.674 10:32:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:23.674 10:32:44 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.674 10:32:44 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:23.674 { 00:04:23.674 "name": "Malloc0", 00:04:23.674 "aliases": [ 00:04:23.674 "288b5e39-d3a2-4bef-8c53-ed6b22eb93f1" 00:04:23.674 ], 00:04:23.674 "product_name": "Malloc disk", 00:04:23.674 "block_size": 512, 00:04:23.674 "num_blocks": 16384, 00:04:23.674 "uuid": "288b5e39-d3a2-4bef-8c53-ed6b22eb93f1", 00:04:23.674 "assigned_rate_limits": { 00:04:23.674 "rw_ios_per_sec": 0, 00:04:23.674 "rw_mbytes_per_sec": 0, 00:04:23.674 "r_mbytes_per_sec": 0, 00:04:23.674 "w_mbytes_per_sec": 0 00:04:23.674 }, 00:04:23.674 "claimed": true, 00:04:23.674 "claim_type": "exclusive_write", 00:04:23.674 "zoned": false, 00:04:23.674 "supported_io_types": { 00:04:23.674 "read": true, 00:04:23.674 "write": true, 00:04:23.674 "unmap": true, 00:04:23.674 "flush": true, 00:04:23.674 "reset": true, 00:04:23.674 "nvme_admin": false, 00:04:23.674 "nvme_io": false, 00:04:23.674 "nvme_io_md": false, 00:04:23.674 "write_zeroes": true, 00:04:23.674 "zcopy": true, 00:04:23.674 "get_zone_info": false, 00:04:23.674 "zone_management": false, 00:04:23.674 "zone_append": false, 00:04:23.674 "compare": false, 00:04:23.674 "compare_and_write": false, 00:04:23.674 "abort": true, 00:04:23.674 "seek_hole": false, 00:04:23.674 "seek_data": false, 00:04:23.674 "copy": true, 00:04:23.674 "nvme_iov_md": false 00:04:23.674 }, 00:04:23.674 "memory_domains": [ 00:04:23.674 { 00:04:23.674 "dma_device_id": "system", 00:04:23.674 "dma_device_type": 1 00:04:23.674 }, 00:04:23.674 { 00:04:23.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.674 "dma_device_type": 2 00:04:23.674 } 00:04:23.674 ], 00:04:23.674 "driver_specific": {} 00:04:23.674 }, 00:04:23.674 { 00:04:23.674 "name": "Passthru0", 00:04:23.674 "aliases": [ 00:04:23.674 "3a81b735-e6b9-59fa-a5ff-29bb2117abf6" 00:04:23.674 ], 00:04:23.674 "product_name": "passthru", 00:04:23.674 
"block_size": 512, 00:04:23.674 "num_blocks": 16384, 00:04:23.674 "uuid": "3a81b735-e6b9-59fa-a5ff-29bb2117abf6", 00:04:23.674 "assigned_rate_limits": { 00:04:23.674 "rw_ios_per_sec": 0, 00:04:23.674 "rw_mbytes_per_sec": 0, 00:04:23.674 "r_mbytes_per_sec": 0, 00:04:23.674 "w_mbytes_per_sec": 0 00:04:23.674 }, 00:04:23.674 "claimed": false, 00:04:23.674 "zoned": false, 00:04:23.674 "supported_io_types": { 00:04:23.674 "read": true, 00:04:23.674 "write": true, 00:04:23.674 "unmap": true, 00:04:23.674 "flush": true, 00:04:23.674 "reset": true, 00:04:23.674 "nvme_admin": false, 00:04:23.674 "nvme_io": false, 00:04:23.674 "nvme_io_md": false, 00:04:23.674 "write_zeroes": true, 00:04:23.674 "zcopy": true, 00:04:23.674 "get_zone_info": false, 00:04:23.674 "zone_management": false, 00:04:23.674 "zone_append": false, 00:04:23.674 "compare": false, 00:04:23.674 "compare_and_write": false, 00:04:23.674 "abort": true, 00:04:23.674 "seek_hole": false, 00:04:23.674 "seek_data": false, 00:04:23.674 "copy": true, 00:04:23.674 "nvme_iov_md": false 00:04:23.674 }, 00:04:23.674 "memory_domains": [ 00:04:23.674 { 00:04:23.674 "dma_device_id": "system", 00:04:23.674 "dma_device_type": 1 00:04:23.674 }, 00:04:23.674 { 00:04:23.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.674 "dma_device_type": 2 00:04:23.674 } 00:04:23.674 ], 00:04:23.674 "driver_specific": { 00:04:23.674 "passthru": { 00:04:23.674 "name": "Passthru0", 00:04:23.674 "base_bdev_name": "Malloc0" 00:04:23.674 } 00:04:23.674 } 00:04:23.674 } 00:04:23.674 ]' 00:04:23.674 10:32:44 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:23.932 10:32:44 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:23.932 10:32:44 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:23.932 10:32:44 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.932 10:32:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.932 10:32:44 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.932 10:32:44 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:23.932 10:32:44 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.932 10:32:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.932 10:32:44 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.932 10:32:44 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:23.932 10:32:44 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.932 10:32:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.932 10:32:44 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.932 10:32:44 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:23.932 10:32:44 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:23.932 ************************************ 00:04:23.932 END TEST rpc_integrity 00:04:23.932 ************************************ 00:04:23.932 10:32:44 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:23.932 00:04:23.932 real 0m0.356s 00:04:23.932 user 0m0.215s 00:04:23.932 sys 0m0.046s 00:04:23.932 10:32:44 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.932 10:32:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.932 10:32:44 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:23.932 10:32:44 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.932 10:32:44 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.932 10:32:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.932 ************************************ 00:04:23.932 START TEST rpc_plugins 00:04:23.932 ************************************ 00:04:23.932 10:32:44 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:23.932 10:32:44 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:04:23.932 10:32:44 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.932 10:32:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:23.932 10:32:45 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.932 10:32:45 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:23.932 10:32:45 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:23.932 10:32:45 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.932 10:32:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:23.932 10:32:45 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.932 10:32:45 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:23.932 { 00:04:23.932 "name": "Malloc1", 00:04:23.932 "aliases": [ 00:04:23.932 "b94ca7ff-d2b2-4863-b8c9-d4a4b478000e" 00:04:23.932 ], 00:04:23.932 "product_name": "Malloc disk", 00:04:23.932 "block_size": 4096, 00:04:23.932 "num_blocks": 256, 00:04:23.932 "uuid": "b94ca7ff-d2b2-4863-b8c9-d4a4b478000e", 00:04:23.932 "assigned_rate_limits": { 00:04:23.932 "rw_ios_per_sec": 0, 00:04:23.932 "rw_mbytes_per_sec": 0, 00:04:23.932 "r_mbytes_per_sec": 0, 00:04:23.932 "w_mbytes_per_sec": 0 00:04:23.932 }, 00:04:23.932 "claimed": false, 00:04:23.932 "zoned": false, 00:04:23.932 "supported_io_types": { 00:04:23.932 "read": true, 00:04:23.932 "write": true, 00:04:23.932 "unmap": true, 00:04:23.932 "flush": true, 00:04:23.932 "reset": true, 00:04:23.932 "nvme_admin": false, 00:04:23.932 "nvme_io": false, 00:04:23.932 "nvme_io_md": false, 00:04:23.932 "write_zeroes": true, 00:04:23.932 "zcopy": true, 00:04:23.932 "get_zone_info": false, 00:04:23.932 "zone_management": false, 00:04:23.932 "zone_append": false, 00:04:23.932 "compare": false, 00:04:23.932 "compare_and_write": false, 00:04:23.932 "abort": true, 00:04:23.932 "seek_hole": false, 00:04:23.932 "seek_data": false, 00:04:23.932 "copy": 
true, 00:04:23.933 "nvme_iov_md": false 00:04:23.933 }, 00:04:23.933 "memory_domains": [ 00:04:23.933 { 00:04:23.933 "dma_device_id": "system", 00:04:23.933 "dma_device_type": 1 00:04:23.933 }, 00:04:23.933 { 00:04:23.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.933 "dma_device_type": 2 00:04:23.933 } 00:04:23.933 ], 00:04:23.933 "driver_specific": {} 00:04:23.933 } 00:04:23.933 ]' 00:04:23.933 10:32:45 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:23.933 10:32:45 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:23.933 10:32:45 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:23.933 10:32:45 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.933 10:32:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:24.191 10:32:45 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.191 10:32:45 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:24.191 10:32:45 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.191 10:32:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:24.191 10:32:45 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.191 10:32:45 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:24.191 10:32:45 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:24.191 ************************************ 00:04:24.191 END TEST rpc_plugins 00:04:24.191 ************************************ 00:04:24.191 10:32:45 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:24.191 00:04:24.191 real 0m0.182s 00:04:24.191 user 0m0.121s 00:04:24.191 sys 0m0.019s 00:04:24.191 10:32:45 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.191 10:32:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:24.191 10:32:45 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:24.191 10:32:45 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.191 10:32:45 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.191 10:32:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.191 ************************************ 00:04:24.191 START TEST rpc_trace_cmd_test 00:04:24.191 ************************************ 00:04:24.191 10:32:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:24.191 10:32:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:24.191 10:32:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:24.191 10:32:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.191 10:32:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:24.191 10:32:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.191 10:32:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:24.191 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56830", 00:04:24.191 "tpoint_group_mask": "0x8", 00:04:24.191 "iscsi_conn": { 00:04:24.191 "mask": "0x2", 00:04:24.191 "tpoint_mask": "0x0" 00:04:24.191 }, 00:04:24.191 "scsi": { 00:04:24.191 "mask": "0x4", 00:04:24.191 "tpoint_mask": "0x0" 00:04:24.191 }, 00:04:24.191 "bdev": { 00:04:24.191 "mask": "0x8", 00:04:24.191 "tpoint_mask": "0xffffffffffffffff" 00:04:24.191 }, 00:04:24.191 "nvmf_rdma": { 00:04:24.191 "mask": "0x10", 00:04:24.191 "tpoint_mask": "0x0" 00:04:24.191 }, 00:04:24.191 "nvmf_tcp": { 00:04:24.191 "mask": "0x20", 00:04:24.191 "tpoint_mask": "0x0" 00:04:24.191 }, 00:04:24.191 "ftl": { 00:04:24.191 "mask": "0x40", 00:04:24.192 "tpoint_mask": "0x0" 00:04:24.192 }, 00:04:24.192 "blobfs": { 00:04:24.192 "mask": "0x80", 00:04:24.192 "tpoint_mask": "0x0" 00:04:24.192 }, 00:04:24.192 "dsa": { 00:04:24.192 "mask": "0x200", 00:04:24.192 "tpoint_mask": "0x0" 00:04:24.192 }, 00:04:24.192 "thread": { 00:04:24.192 "mask": "0x400", 00:04:24.192 
"tpoint_mask": "0x0" 00:04:24.192 }, 00:04:24.192 "nvme_pcie": { 00:04:24.192 "mask": "0x800", 00:04:24.192 "tpoint_mask": "0x0" 00:04:24.192 }, 00:04:24.192 "iaa": { 00:04:24.192 "mask": "0x1000", 00:04:24.192 "tpoint_mask": "0x0" 00:04:24.192 }, 00:04:24.192 "nvme_tcp": { 00:04:24.192 "mask": "0x2000", 00:04:24.192 "tpoint_mask": "0x0" 00:04:24.192 }, 00:04:24.192 "bdev_nvme": { 00:04:24.192 "mask": "0x4000", 00:04:24.192 "tpoint_mask": "0x0" 00:04:24.192 }, 00:04:24.192 "sock": { 00:04:24.192 "mask": "0x8000", 00:04:24.192 "tpoint_mask": "0x0" 00:04:24.192 }, 00:04:24.192 "blob": { 00:04:24.192 "mask": "0x10000", 00:04:24.192 "tpoint_mask": "0x0" 00:04:24.192 }, 00:04:24.192 "bdev_raid": { 00:04:24.192 "mask": "0x20000", 00:04:24.192 "tpoint_mask": "0x0" 00:04:24.192 }, 00:04:24.192 "scheduler": { 00:04:24.192 "mask": "0x40000", 00:04:24.192 "tpoint_mask": "0x0" 00:04:24.192 } 00:04:24.192 }' 00:04:24.192 10:32:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:24.192 10:32:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:24.192 10:32:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:24.192 10:32:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:24.192 10:32:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:24.450 10:32:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:24.450 10:32:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:24.450 10:32:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:24.450 10:32:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:24.450 ************************************ 00:04:24.450 END TEST rpc_trace_cmd_test 00:04:24.450 ************************************ 00:04:24.450 10:32:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:24.450 00:04:24.450 real 0m0.257s 00:04:24.450 user 
0m0.215s 00:04:24.450 sys 0m0.033s 00:04:24.450 10:32:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.450 10:32:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:24.450 10:32:45 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:24.450 10:32:45 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:24.450 10:32:45 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:24.450 10:32:45 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.450 10:32:45 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.450 10:32:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.450 ************************************ 00:04:24.450 START TEST rpc_daemon_integrity 00:04:24.450 ************************************ 00:04:24.450 10:32:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:24.450 10:32:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:24.450 10:32:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.450 10:32:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.450 10:32:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.450 10:32:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:24.450 10:32:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:24.450 10:32:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:24.450 10:32:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:24.450 10:32:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.450 10:32:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.709 10:32:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.709 10:32:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:04:24.709 10:32:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:24.709 10:32:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.709 10:32:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.709 10:32:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.709 10:32:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:24.709 { 00:04:24.709 "name": "Malloc2", 00:04:24.709 "aliases": [ 00:04:24.709 "dcadb111-0117-4dd2-b38c-c3c6a84dd05f" 00:04:24.709 ], 00:04:24.709 "product_name": "Malloc disk", 00:04:24.709 "block_size": 512, 00:04:24.709 "num_blocks": 16384, 00:04:24.709 "uuid": "dcadb111-0117-4dd2-b38c-c3c6a84dd05f", 00:04:24.709 "assigned_rate_limits": { 00:04:24.709 "rw_ios_per_sec": 0, 00:04:24.709 "rw_mbytes_per_sec": 0, 00:04:24.709 "r_mbytes_per_sec": 0, 00:04:24.709 "w_mbytes_per_sec": 0 00:04:24.709 }, 00:04:24.709 "claimed": false, 00:04:24.709 "zoned": false, 00:04:24.709 "supported_io_types": { 00:04:24.709 "read": true, 00:04:24.709 "write": true, 00:04:24.709 "unmap": true, 00:04:24.709 "flush": true, 00:04:24.709 "reset": true, 00:04:24.709 "nvme_admin": false, 00:04:24.709 "nvme_io": false, 00:04:24.709 "nvme_io_md": false, 00:04:24.709 "write_zeroes": true, 00:04:24.709 "zcopy": true, 00:04:24.709 "get_zone_info": false, 00:04:24.709 "zone_management": false, 00:04:24.709 "zone_append": false, 00:04:24.709 "compare": false, 00:04:24.710 "compare_and_write": false, 00:04:24.710 "abort": true, 00:04:24.710 "seek_hole": false, 00:04:24.710 "seek_data": false, 00:04:24.710 "copy": true, 00:04:24.710 "nvme_iov_md": false 00:04:24.710 }, 00:04:24.710 "memory_domains": [ 00:04:24.710 { 00:04:24.710 "dma_device_id": "system", 00:04:24.710 "dma_device_type": 1 00:04:24.710 }, 00:04:24.710 { 00:04:24.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.710 "dma_device_type": 2 00:04:24.710 } 
00:04:24.710 ], 00:04:24.710 "driver_specific": {} 00:04:24.710 } 00:04:24.710 ]' 00:04:24.710 10:32:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:24.710 10:32:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:24.710 10:32:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:24.710 10:32:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.710 10:32:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.710 [2024-11-15 10:32:45.690793] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:24.710 [2024-11-15 10:32:45.690889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:24.710 [2024-11-15 10:32:45.690926] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:04:24.710 [2024-11-15 10:32:45.690946] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:24.710 [2024-11-15 10:32:45.694049] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:24.710 [2024-11-15 10:32:45.694107] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:24.710 Passthru0 00:04:24.710 10:32:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.710 10:32:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:24.710 10:32:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.710 10:32:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.710 10:32:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.710 10:32:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:24.710 { 00:04:24.710 "name": "Malloc2", 00:04:24.710 "aliases": [ 00:04:24.710 "dcadb111-0117-4dd2-b38c-c3c6a84dd05f" 
00:04:24.710 ], 00:04:24.710 "product_name": "Malloc disk", 00:04:24.710 "block_size": 512, 00:04:24.710 "num_blocks": 16384, 00:04:24.710 "uuid": "dcadb111-0117-4dd2-b38c-c3c6a84dd05f", 00:04:24.710 "assigned_rate_limits": { 00:04:24.710 "rw_ios_per_sec": 0, 00:04:24.710 "rw_mbytes_per_sec": 0, 00:04:24.710 "r_mbytes_per_sec": 0, 00:04:24.710 "w_mbytes_per_sec": 0 00:04:24.710 }, 00:04:24.710 "claimed": true, 00:04:24.710 "claim_type": "exclusive_write", 00:04:24.710 "zoned": false, 00:04:24.710 "supported_io_types": { 00:04:24.710 "read": true, 00:04:24.710 "write": true, 00:04:24.710 "unmap": true, 00:04:24.710 "flush": true, 00:04:24.710 "reset": true, 00:04:24.710 "nvme_admin": false, 00:04:24.710 "nvme_io": false, 00:04:24.710 "nvme_io_md": false, 00:04:24.710 "write_zeroes": true, 00:04:24.710 "zcopy": true, 00:04:24.710 "get_zone_info": false, 00:04:24.710 "zone_management": false, 00:04:24.710 "zone_append": false, 00:04:24.710 "compare": false, 00:04:24.710 "compare_and_write": false, 00:04:24.710 "abort": true, 00:04:24.710 "seek_hole": false, 00:04:24.710 "seek_data": false, 00:04:24.710 "copy": true, 00:04:24.710 "nvme_iov_md": false 00:04:24.710 }, 00:04:24.710 "memory_domains": [ 00:04:24.710 { 00:04:24.710 "dma_device_id": "system", 00:04:24.710 "dma_device_type": 1 00:04:24.710 }, 00:04:24.710 { 00:04:24.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.710 "dma_device_type": 2 00:04:24.710 } 00:04:24.710 ], 00:04:24.710 "driver_specific": {} 00:04:24.710 }, 00:04:24.710 { 00:04:24.710 "name": "Passthru0", 00:04:24.710 "aliases": [ 00:04:24.710 "c07c88ce-b49e-5c9d-b5a0-1a35f5f85673" 00:04:24.710 ], 00:04:24.710 "product_name": "passthru", 00:04:24.710 "block_size": 512, 00:04:24.710 "num_blocks": 16384, 00:04:24.710 "uuid": "c07c88ce-b49e-5c9d-b5a0-1a35f5f85673", 00:04:24.710 "assigned_rate_limits": { 00:04:24.710 "rw_ios_per_sec": 0, 00:04:24.710 "rw_mbytes_per_sec": 0, 00:04:24.710 "r_mbytes_per_sec": 0, 00:04:24.710 "w_mbytes_per_sec": 0 
00:04:24.710 }, 00:04:24.710 "claimed": false, 00:04:24.710 "zoned": false, 00:04:24.710 "supported_io_types": { 00:04:24.710 "read": true, 00:04:24.710 "write": true, 00:04:24.710 "unmap": true, 00:04:24.710 "flush": true, 00:04:24.710 "reset": true, 00:04:24.710 "nvme_admin": false, 00:04:24.710 "nvme_io": false, 00:04:24.710 "nvme_io_md": false, 00:04:24.710 "write_zeroes": true, 00:04:24.710 "zcopy": true, 00:04:24.710 "get_zone_info": false, 00:04:24.710 "zone_management": false, 00:04:24.710 "zone_append": false, 00:04:24.710 "compare": false, 00:04:24.710 "compare_and_write": false, 00:04:24.710 "abort": true, 00:04:24.710 "seek_hole": false, 00:04:24.710 "seek_data": false, 00:04:24.710 "copy": true, 00:04:24.710 "nvme_iov_md": false 00:04:24.710 }, 00:04:24.710 "memory_domains": [ 00:04:24.710 { 00:04:24.710 "dma_device_id": "system", 00:04:24.710 "dma_device_type": 1 00:04:24.710 }, 00:04:24.710 { 00:04:24.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.710 "dma_device_type": 2 00:04:24.710 } 00:04:24.710 ], 00:04:24.710 "driver_specific": { 00:04:24.710 "passthru": { 00:04:24.710 "name": "Passthru0", 00:04:24.710 "base_bdev_name": "Malloc2" 00:04:24.710 } 00:04:24.710 } 00:04:24.710 } 00:04:24.710 ]' 00:04:24.710 10:32:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:24.710 10:32:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:24.710 10:32:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:24.710 10:32:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.710 10:32:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.710 10:32:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.710 10:32:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:24.710 10:32:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:04:24.710 10:32:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.710 10:32:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.710 10:32:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:24.710 10:32:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.710 10:32:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.710 10:32:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.710 10:32:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:24.710 10:32:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:24.968 ************************************ 00:04:24.968 END TEST rpc_daemon_integrity 00:04:24.968 ************************************ 00:04:24.968 10:32:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:24.968 00:04:24.968 real 0m0.343s 00:04:24.969 user 0m0.204s 00:04:24.969 sys 0m0.040s 00:04:24.969 10:32:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.969 10:32:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.969 10:32:45 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:24.969 10:32:45 rpc -- rpc/rpc.sh@84 -- # killprocess 56830 00:04:24.969 10:32:45 rpc -- common/autotest_common.sh@954 -- # '[' -z 56830 ']' 00:04:24.969 10:32:45 rpc -- common/autotest_common.sh@958 -- # kill -0 56830 00:04:24.969 10:32:45 rpc -- common/autotest_common.sh@959 -- # uname 00:04:24.969 10:32:45 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:24.969 10:32:45 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56830 00:04:24.969 killing process with pid 56830 00:04:24.969 10:32:45 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:24.969 10:32:45 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:04:24.969 10:32:45 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56830' 00:04:24.969 10:32:45 rpc -- common/autotest_common.sh@973 -- # kill 56830 00:04:24.969 10:32:45 rpc -- common/autotest_common.sh@978 -- # wait 56830 00:04:27.518 00:04:27.518 real 0m5.135s 00:04:27.518 user 0m5.819s 00:04:27.518 sys 0m0.902s 00:04:27.518 10:32:48 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.518 10:32:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.518 ************************************ 00:04:27.518 END TEST rpc 00:04:27.518 ************************************ 00:04:27.518 10:32:48 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:27.518 10:32:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.518 10:32:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.518 10:32:48 -- common/autotest_common.sh@10 -- # set +x 00:04:27.518 ************************************ 00:04:27.518 START TEST skip_rpc 00:04:27.518 ************************************ 00:04:27.518 10:32:48 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:27.518 * Looking for test storage... 
00:04:27.518 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:27.518 10:32:48 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:27.518 10:32:48 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:27.518 10:32:48 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:27.518 10:32:48 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:27.518 10:32:48 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:27.518 10:32:48 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:27.518 10:32:48 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:27.518 10:32:48 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:27.518 10:32:48 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:27.518 10:32:48 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:27.518 10:32:48 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:27.518 10:32:48 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:27.518 10:32:48 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:27.518 10:32:48 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:27.518 10:32:48 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:27.518 10:32:48 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:27.518 10:32:48 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:27.518 10:32:48 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:27.518 10:32:48 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:27.518 10:32:48 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:27.518 10:32:48 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:27.518 10:32:48 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:27.518 10:32:48 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:27.518 10:32:48 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:27.518 10:32:48 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:27.518 10:32:48 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:27.518 10:32:48 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:27.518 10:32:48 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:27.518 10:32:48 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:27.518 10:32:48 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:27.518 10:32:48 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:27.518 10:32:48 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:27.518 10:32:48 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:27.518 10:32:48 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:27.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.518 --rc genhtml_branch_coverage=1 00:04:27.518 --rc genhtml_function_coverage=1 00:04:27.518 --rc genhtml_legend=1 00:04:27.518 --rc geninfo_all_blocks=1 00:04:27.518 --rc geninfo_unexecuted_blocks=1 00:04:27.518 00:04:27.518 ' 00:04:27.518 10:32:48 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:27.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.518 --rc genhtml_branch_coverage=1 00:04:27.518 --rc genhtml_function_coverage=1 00:04:27.518 --rc genhtml_legend=1 00:04:27.518 --rc geninfo_all_blocks=1 00:04:27.518 --rc geninfo_unexecuted_blocks=1 00:04:27.518 00:04:27.518 ' 00:04:27.518 10:32:48 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:27.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.518 --rc genhtml_branch_coverage=1 00:04:27.518 --rc genhtml_function_coverage=1 00:04:27.518 --rc genhtml_legend=1 00:04:27.518 --rc geninfo_all_blocks=1 00:04:27.518 --rc geninfo_unexecuted_blocks=1 00:04:27.518 00:04:27.518 ' 00:04:27.518 10:32:48 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:27.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.518 --rc genhtml_branch_coverage=1 00:04:27.518 --rc genhtml_function_coverage=1 00:04:27.518 --rc genhtml_legend=1 00:04:27.518 --rc geninfo_all_blocks=1 00:04:27.518 --rc geninfo_unexecuted_blocks=1 00:04:27.518 00:04:27.518 ' 00:04:27.518 10:32:48 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:27.518 10:32:48 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:27.518 10:32:48 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:27.518 10:32:48 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.518 10:32:48 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.518 10:32:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.518 ************************************ 00:04:27.518 START TEST skip_rpc 00:04:27.518 ************************************ 00:04:27.518 10:32:48 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:27.518 10:32:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57059 00:04:27.518 10:32:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:27.518 10:32:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:27.518 10:32:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:27.518 [2024-11-15 10:32:48.523605] Starting SPDK v25.01-pre 
git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:04:27.518 [2024-11-15 10:32:48.523955] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57059 ] 00:04:27.778 [2024-11-15 10:32:48.709571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.778 [2024-11-15 10:32:48.840349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.048 10:32:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:33.048 10:32:53 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:33.048 10:32:53 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:33.048 10:32:53 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:33.048 10:32:53 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:33.048 10:32:53 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:33.048 10:32:53 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:33.048 10:32:53 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:33.048 10:32:53 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.048 10:32:53 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.048 10:32:53 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:33.048 10:32:53 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:33.048 10:32:53 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:33.048 10:32:53 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:33.048 10:32:53 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:04:33.048 10:32:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:33.048 10:32:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57059 00:04:33.048 10:32:53 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57059 ']' 00:04:33.048 10:32:53 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57059 00:04:33.048 10:32:53 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:33.048 10:32:53 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:33.048 10:32:53 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57059 00:04:33.048 10:32:53 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:33.048 10:32:53 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:33.048 10:32:53 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57059' 00:04:33.048 killing process with pid 57059 00:04:33.048 10:32:53 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57059 00:04:33.048 10:32:53 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57059 00:04:34.947 00:04:34.947 real 0m7.281s 00:04:34.947 user 0m6.693s 00:04:34.947 sys 0m0.482s 00:04:34.947 10:32:55 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.947 ************************************ 00:04:34.947 END TEST skip_rpc 00:04:34.947 ************************************ 00:04:34.947 10:32:55 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.947 10:32:55 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:34.947 10:32:55 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.947 10:32:55 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.947 10:32:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.947 
************************************ 00:04:34.947 START TEST skip_rpc_with_json 00:04:34.947 ************************************ 00:04:34.947 10:32:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:34.947 10:32:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:34.947 10:32:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57169 00:04:34.947 10:32:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:34.947 10:32:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57169 00:04:34.947 10:32:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:34.947 10:32:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57169 ']' 00:04:34.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.948 10:32:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.948 10:32:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:34.948 10:32:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.948 10:32:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:34.948 10:32:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:34.948 [2024-11-15 10:32:55.849590] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:04:34.948 [2024-11-15 10:32:55.849778] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57169 ] 00:04:34.948 [2024-11-15 10:32:56.041810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.205 [2024-11-15 10:32:56.191640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.142 10:32:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:36.142 10:32:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:36.142 10:32:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:36.142 10:32:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.142 10:32:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:36.142 [2024-11-15 10:32:57.074569] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:36.142 request: 00:04:36.142 { 00:04:36.142 "trtype": "tcp", 00:04:36.142 "method": "nvmf_get_transports", 00:04:36.142 "req_id": 1 00:04:36.142 } 00:04:36.142 Got JSON-RPC error response 00:04:36.142 response: 00:04:36.142 { 00:04:36.142 "code": -19, 00:04:36.142 "message": "No such device" 00:04:36.142 } 00:04:36.142 10:32:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:36.142 10:32:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:36.142 10:32:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.142 10:32:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:36.142 [2024-11-15 10:32:57.086792] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:04:36.142 10:32:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.142 10:32:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:36.142 10:32:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.142 10:32:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:36.142 10:32:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.142 10:32:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:36.142 { 00:04:36.142 "subsystems": [ 00:04:36.142 { 00:04:36.142 "subsystem": "fsdev", 00:04:36.142 "config": [ 00:04:36.142 { 00:04:36.142 "method": "fsdev_set_opts", 00:04:36.142 "params": { 00:04:36.142 "fsdev_io_pool_size": 65535, 00:04:36.142 "fsdev_io_cache_size": 256 00:04:36.142 } 00:04:36.142 } 00:04:36.142 ] 00:04:36.142 }, 00:04:36.142 { 00:04:36.142 "subsystem": "keyring", 00:04:36.142 "config": [] 00:04:36.142 }, 00:04:36.142 { 00:04:36.142 "subsystem": "iobuf", 00:04:36.142 "config": [ 00:04:36.142 { 00:04:36.142 "method": "iobuf_set_options", 00:04:36.142 "params": { 00:04:36.142 "small_pool_count": 8192, 00:04:36.142 "large_pool_count": 1024, 00:04:36.142 "small_bufsize": 8192, 00:04:36.142 "large_bufsize": 135168, 00:04:36.142 "enable_numa": false 00:04:36.142 } 00:04:36.142 } 00:04:36.142 ] 00:04:36.142 }, 00:04:36.142 { 00:04:36.142 "subsystem": "sock", 00:04:36.142 "config": [ 00:04:36.142 { 00:04:36.142 "method": "sock_set_default_impl", 00:04:36.142 "params": { 00:04:36.142 "impl_name": "posix" 00:04:36.142 } 00:04:36.142 }, 00:04:36.142 { 00:04:36.142 "method": "sock_impl_set_options", 00:04:36.142 "params": { 00:04:36.142 "impl_name": "ssl", 00:04:36.142 "recv_buf_size": 4096, 00:04:36.142 "send_buf_size": 4096, 00:04:36.142 "enable_recv_pipe": true, 00:04:36.142 "enable_quickack": false, 00:04:36.142 
"enable_placement_id": 0, 00:04:36.142 "enable_zerocopy_send_server": true, 00:04:36.142 "enable_zerocopy_send_client": false, 00:04:36.142 "zerocopy_threshold": 0, 00:04:36.142 "tls_version": 0, 00:04:36.142 "enable_ktls": false 00:04:36.142 } 00:04:36.142 }, 00:04:36.142 { 00:04:36.142 "method": "sock_impl_set_options", 00:04:36.142 "params": { 00:04:36.142 "impl_name": "posix", 00:04:36.142 "recv_buf_size": 2097152, 00:04:36.142 "send_buf_size": 2097152, 00:04:36.142 "enable_recv_pipe": true, 00:04:36.142 "enable_quickack": false, 00:04:36.142 "enable_placement_id": 0, 00:04:36.142 "enable_zerocopy_send_server": true, 00:04:36.142 "enable_zerocopy_send_client": false, 00:04:36.142 "zerocopy_threshold": 0, 00:04:36.142 "tls_version": 0, 00:04:36.142 "enable_ktls": false 00:04:36.142 } 00:04:36.142 } 00:04:36.142 ] 00:04:36.142 }, 00:04:36.142 { 00:04:36.142 "subsystem": "vmd", 00:04:36.142 "config": [] 00:04:36.142 }, 00:04:36.142 { 00:04:36.142 "subsystem": "accel", 00:04:36.142 "config": [ 00:04:36.142 { 00:04:36.142 "method": "accel_set_options", 00:04:36.142 "params": { 00:04:36.142 "small_cache_size": 128, 00:04:36.142 "large_cache_size": 16, 00:04:36.142 "task_count": 2048, 00:04:36.142 "sequence_count": 2048, 00:04:36.142 "buf_count": 2048 00:04:36.142 } 00:04:36.142 } 00:04:36.142 ] 00:04:36.142 }, 00:04:36.142 { 00:04:36.142 "subsystem": "bdev", 00:04:36.142 "config": [ 00:04:36.142 { 00:04:36.142 "method": "bdev_set_options", 00:04:36.142 "params": { 00:04:36.143 "bdev_io_pool_size": 65535, 00:04:36.143 "bdev_io_cache_size": 256, 00:04:36.143 "bdev_auto_examine": true, 00:04:36.143 "iobuf_small_cache_size": 128, 00:04:36.143 "iobuf_large_cache_size": 16 00:04:36.143 } 00:04:36.143 }, 00:04:36.143 { 00:04:36.143 "method": "bdev_raid_set_options", 00:04:36.143 "params": { 00:04:36.143 "process_window_size_kb": 1024, 00:04:36.143 "process_max_bandwidth_mb_sec": 0 00:04:36.143 } 00:04:36.143 }, 00:04:36.143 { 00:04:36.143 "method": "bdev_iscsi_set_options", 
00:04:36.143 "params": { 00:04:36.143 "timeout_sec": 30 00:04:36.143 } 00:04:36.143 }, 00:04:36.143 { 00:04:36.143 "method": "bdev_nvme_set_options", 00:04:36.143 "params": { 00:04:36.143 "action_on_timeout": "none", 00:04:36.143 "timeout_us": 0, 00:04:36.143 "timeout_admin_us": 0, 00:04:36.143 "keep_alive_timeout_ms": 10000, 00:04:36.143 "arbitration_burst": 0, 00:04:36.143 "low_priority_weight": 0, 00:04:36.143 "medium_priority_weight": 0, 00:04:36.143 "high_priority_weight": 0, 00:04:36.143 "nvme_adminq_poll_period_us": 10000, 00:04:36.143 "nvme_ioq_poll_period_us": 0, 00:04:36.143 "io_queue_requests": 0, 00:04:36.143 "delay_cmd_submit": true, 00:04:36.143 "transport_retry_count": 4, 00:04:36.143 "bdev_retry_count": 3, 00:04:36.143 "transport_ack_timeout": 0, 00:04:36.143 "ctrlr_loss_timeout_sec": 0, 00:04:36.143 "reconnect_delay_sec": 0, 00:04:36.143 "fast_io_fail_timeout_sec": 0, 00:04:36.143 "disable_auto_failback": false, 00:04:36.143 "generate_uuids": false, 00:04:36.143 "transport_tos": 0, 00:04:36.143 "nvme_error_stat": false, 00:04:36.143 "rdma_srq_size": 0, 00:04:36.143 "io_path_stat": false, 00:04:36.143 "allow_accel_sequence": false, 00:04:36.143 "rdma_max_cq_size": 0, 00:04:36.143 "rdma_cm_event_timeout_ms": 0, 00:04:36.143 "dhchap_digests": [ 00:04:36.143 "sha256", 00:04:36.143 "sha384", 00:04:36.143 "sha512" 00:04:36.143 ], 00:04:36.143 "dhchap_dhgroups": [ 00:04:36.143 "null", 00:04:36.143 "ffdhe2048", 00:04:36.143 "ffdhe3072", 00:04:36.143 "ffdhe4096", 00:04:36.143 "ffdhe6144", 00:04:36.143 "ffdhe8192" 00:04:36.143 ] 00:04:36.143 } 00:04:36.143 }, 00:04:36.143 { 00:04:36.143 "method": "bdev_nvme_set_hotplug", 00:04:36.143 "params": { 00:04:36.143 "period_us": 100000, 00:04:36.143 "enable": false 00:04:36.143 } 00:04:36.143 }, 00:04:36.143 { 00:04:36.143 "method": "bdev_wait_for_examine" 00:04:36.143 } 00:04:36.143 ] 00:04:36.143 }, 00:04:36.143 { 00:04:36.143 "subsystem": "scsi", 00:04:36.143 "config": null 00:04:36.143 }, 00:04:36.143 { 
00:04:36.143 "subsystem": "scheduler", 00:04:36.143 "config": [ 00:04:36.143 { 00:04:36.143 "method": "framework_set_scheduler", 00:04:36.143 "params": { 00:04:36.143 "name": "static" 00:04:36.143 } 00:04:36.143 } 00:04:36.143 ] 00:04:36.143 }, 00:04:36.143 { 00:04:36.143 "subsystem": "vhost_scsi", 00:04:36.143 "config": [] 00:04:36.143 }, 00:04:36.143 { 00:04:36.143 "subsystem": "vhost_blk", 00:04:36.143 "config": [] 00:04:36.143 }, 00:04:36.143 { 00:04:36.143 "subsystem": "ublk", 00:04:36.143 "config": [] 00:04:36.143 }, 00:04:36.143 { 00:04:36.143 "subsystem": "nbd", 00:04:36.143 "config": [] 00:04:36.143 }, 00:04:36.143 { 00:04:36.143 "subsystem": "nvmf", 00:04:36.143 "config": [ 00:04:36.143 { 00:04:36.143 "method": "nvmf_set_config", 00:04:36.143 "params": { 00:04:36.143 "discovery_filter": "match_any", 00:04:36.143 "admin_cmd_passthru": { 00:04:36.143 "identify_ctrlr": false 00:04:36.143 }, 00:04:36.143 "dhchap_digests": [ 00:04:36.143 "sha256", 00:04:36.143 "sha384", 00:04:36.143 "sha512" 00:04:36.143 ], 00:04:36.143 "dhchap_dhgroups": [ 00:04:36.143 "null", 00:04:36.143 "ffdhe2048", 00:04:36.143 "ffdhe3072", 00:04:36.143 "ffdhe4096", 00:04:36.143 "ffdhe6144", 00:04:36.143 "ffdhe8192" 00:04:36.143 ] 00:04:36.143 } 00:04:36.143 }, 00:04:36.143 { 00:04:36.143 "method": "nvmf_set_max_subsystems", 00:04:36.143 "params": { 00:04:36.143 "max_subsystems": 1024 00:04:36.143 } 00:04:36.143 }, 00:04:36.143 { 00:04:36.143 "method": "nvmf_set_crdt", 00:04:36.143 "params": { 00:04:36.143 "crdt1": 0, 00:04:36.143 "crdt2": 0, 00:04:36.143 "crdt3": 0 00:04:36.143 } 00:04:36.143 }, 00:04:36.143 { 00:04:36.143 "method": "nvmf_create_transport", 00:04:36.143 "params": { 00:04:36.143 "trtype": "TCP", 00:04:36.143 "max_queue_depth": 128, 00:04:36.143 "max_io_qpairs_per_ctrlr": 127, 00:04:36.143 "in_capsule_data_size": 4096, 00:04:36.143 "max_io_size": 131072, 00:04:36.143 "io_unit_size": 131072, 00:04:36.143 "max_aq_depth": 128, 00:04:36.143 "num_shared_buffers": 511, 
00:04:36.143 "buf_cache_size": 4294967295, 00:04:36.143 "dif_insert_or_strip": false, 00:04:36.143 "zcopy": false, 00:04:36.143 "c2h_success": true, 00:04:36.143 "sock_priority": 0, 00:04:36.143 "abort_timeout_sec": 1, 00:04:36.143 "ack_timeout": 0, 00:04:36.143 "data_wr_pool_size": 0 00:04:36.143 } 00:04:36.143 } 00:04:36.143 ] 00:04:36.143 }, 00:04:36.143 { 00:04:36.143 "subsystem": "iscsi", 00:04:36.143 "config": [ 00:04:36.143 { 00:04:36.143 "method": "iscsi_set_options", 00:04:36.143 "params": { 00:04:36.143 "node_base": "iqn.2016-06.io.spdk", 00:04:36.143 "max_sessions": 128, 00:04:36.143 "max_connections_per_session": 2, 00:04:36.143 "max_queue_depth": 64, 00:04:36.143 "default_time2wait": 2, 00:04:36.143 "default_time2retain": 20, 00:04:36.143 "first_burst_length": 8192, 00:04:36.143 "immediate_data": true, 00:04:36.143 "allow_duplicated_isid": false, 00:04:36.143 "error_recovery_level": 0, 00:04:36.143 "nop_timeout": 60, 00:04:36.143 "nop_in_interval": 30, 00:04:36.143 "disable_chap": false, 00:04:36.143 "require_chap": false, 00:04:36.143 "mutual_chap": false, 00:04:36.143 "chap_group": 0, 00:04:36.143 "max_large_datain_per_connection": 64, 00:04:36.143 "max_r2t_per_connection": 4, 00:04:36.143 "pdu_pool_size": 36864, 00:04:36.143 "immediate_data_pool_size": 16384, 00:04:36.143 "data_out_pool_size": 2048 00:04:36.143 } 00:04:36.143 } 00:04:36.143 ] 00:04:36.143 } 00:04:36.143 ] 00:04:36.143 } 00:04:36.143 10:32:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:36.143 10:32:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57169 00:04:36.143 10:32:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57169 ']' 00:04:36.143 10:32:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57169 00:04:36.143 10:32:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:36.143 10:32:57 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:36.143 10:32:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57169 00:04:36.402 10:32:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:36.402 killing process with pid 57169 00:04:36.402 10:32:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:36.402 10:32:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57169' 00:04:36.402 10:32:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57169 00:04:36.402 10:32:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57169 00:04:38.933 10:32:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57219 00:04:38.933 10:32:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:38.933 10:32:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:44.203 10:33:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57219 00:04:44.203 10:33:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57219 ']' 00:04:44.203 10:33:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57219 00:04:44.203 10:33:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:44.203 10:33:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:44.203 10:33:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57219 00:04:44.203 10:33:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:44.203 10:33:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:04:44.203 killing process with pid 57219 00:04:44.203 10:33:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57219' 00:04:44.203 10:33:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57219 00:04:44.203 10:33:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57219 00:04:46.106 10:33:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:46.106 10:33:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:46.106 00:04:46.106 real 0m11.144s 00:04:46.106 user 0m10.518s 00:04:46.106 sys 0m1.061s 00:04:46.106 ************************************ 00:04:46.106 END TEST skip_rpc_with_json 00:04:46.106 ************************************ 00:04:46.106 10:33:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.106 10:33:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:46.106 10:33:06 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:46.106 10:33:06 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.106 10:33:06 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.106 10:33:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.106 ************************************ 00:04:46.106 START TEST skip_rpc_with_delay 00:04:46.106 ************************************ 00:04:46.106 10:33:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:46.106 10:33:06 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:46.106 10:33:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:46.106 10:33:06 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:46.106 10:33:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:46.106 10:33:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:46.106 10:33:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:46.106 10:33:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:46.106 10:33:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:46.106 10:33:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:46.106 10:33:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:46.106 10:33:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:46.106 10:33:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:46.106 [2024-11-15 10:33:07.049070] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
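The trace above is a negative test: spdk_tgt is expected to *fail* when `--wait-for-rpc` is combined with `--no-rpc-server`, and the `NOT`/`valid_exec_arg` wrappers from autotest_common.sh invert the exit status so that the expected failure counts as a pass. A minimal standalone sketch of that pattern (the simplified `NOT` body below is an assumption; the real helper in autotest_common.sh also validates the executable path first):

```shell
# Negative-test pattern used by skip_rpc_with_delay: run a command that is
# expected to fail, and succeed only if it actually failed.
NOT() {
    # Simplified stand-in for autotest_common.sh's NOT helper.
    if "$@"; then
        return 1   # command unexpectedly succeeded -> test failure
    fi
    return 0       # command failed as expected -> test success
}

# Hypothetical stand-in for the real invocation seen in the log:
#   NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
# `false` is used here as a placeholder command that always fails.
if NOT false; then
    echo "negative test passed: command failed as expected"
fi
```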
00:04:46.106 10:33:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:46.106 ************************************ 00:04:46.106 END TEST skip_rpc_with_delay 00:04:46.106 ************************************ 00:04:46.106 10:33:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:46.106 10:33:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:46.106 10:33:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:46.106 00:04:46.106 real 0m0.208s 00:04:46.106 user 0m0.109s 00:04:46.106 sys 0m0.096s 00:04:46.106 10:33:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.106 10:33:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:46.106 10:33:07 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:46.106 10:33:07 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:46.106 10:33:07 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:46.106 10:33:07 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.106 10:33:07 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.106 10:33:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.106 ************************************ 00:04:46.106 START TEST exit_on_failed_rpc_init 00:04:46.106 ************************************ 00:04:46.106 10:33:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:46.106 10:33:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57353 00:04:46.106 10:33:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57353 00:04:46.106 10:33:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:46.106 10:33:07 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57353 ']' 00:04:46.106 10:33:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.106 10:33:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:46.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.106 10:33:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.106 10:33:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:46.106 10:33:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:46.364 [2024-11-15 10:33:07.304137] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:04:46.364 [2024-11-15 10:33:07.304363] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57353 ] 00:04:46.364 [2024-11-15 10:33:07.491531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.621 [2024-11-15 10:33:07.622344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.554 10:33:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:47.554 10:33:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:47.554 10:33:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:47.554 10:33:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:47.554 10:33:08 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:47.554 10:33:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:47.554 10:33:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:47.554 10:33:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:47.554 10:33:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:47.554 10:33:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:47.554 10:33:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:47.554 10:33:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:47.554 10:33:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:47.554 10:33:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:47.554 10:33:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:47.554 [2024-11-15 10:33:08.640038] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
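The second spdk_tgt launched here (core mask 0x2) is expected to fail its init because the first instance already owns the `/var/tmp/spdk.sock` RPC socket, producing the "Unix domain socket path ... in use" error seen below. A sketch of that "second instance must fail" pattern, modeled with a plain marker file instead of a real Unix socket (the marker-file mechanics are an illustrative assumption; only the error wording and socket path come from the log):

```shell
# "Second instance must fail" pattern from exit_on_failed_rpc_init,
# modeled with a marker file standing in for /var/tmp/spdk.sock.
SOCK=/tmp/fake_spdk.sock   # hypothetical stand-in path

start_instance() {
    # Refuse to start if another instance already owns the socket path,
    # mirroring rpc.c's "Unix domain socket path ... in use" error.
    if [ -e "$SOCK" ]; then
        echo "RPC Unix domain socket path $SOCK in use. Specify another." >&2
        return 1
    fi
    : > "$SOCK"   # claim the path
}

start_instance                  # first instance: claims the socket path
if ! start_instance; then       # second instance: expected to fail
    echo "init failed as expected"
fi
rm -f "$SOCK"                   # cleanup
```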
00:04:47.555 [2024-11-15 10:33:08.640240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57371 ] 00:04:47.813 [2024-11-15 10:33:08.833589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.071 [2024-11-15 10:33:08.986707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.071 [2024-11-15 10:33:08.986817] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:48.071 [2024-11-15 10:33:08.986840] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:48.071 [2024-11-15 10:33:08.986859] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:48.329 10:33:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:48.329 10:33:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:48.329 10:33:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:48.329 10:33:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:48.329 10:33:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:48.329 10:33:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:48.329 10:33:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:48.329 10:33:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57353 00:04:48.329 10:33:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57353 ']' 00:04:48.329 10:33:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57353 00:04:48.329 10:33:09 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:48.329 10:33:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:48.329 10:33:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57353 00:04:48.329 10:33:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:48.329 killing process with pid 57353 00:04:48.329 10:33:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:48.329 10:33:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57353' 00:04:48.329 10:33:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57353 00:04:48.329 10:33:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57353 00:04:50.952 00:04:50.952 real 0m4.383s 00:04:50.952 user 0m4.866s 00:04:50.952 sys 0m0.736s 00:04:50.952 10:33:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.952 10:33:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:50.952 ************************************ 00:04:50.952 END TEST exit_on_failed_rpc_init 00:04:50.952 ************************************ 00:04:50.952 10:33:11 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:50.952 00:04:50.952 real 0m23.402s 00:04:50.952 user 0m22.372s 00:04:50.952 sys 0m2.568s 00:04:50.952 10:33:11 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.952 10:33:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.952 ************************************ 00:04:50.952 END TEST skip_rpc 00:04:50.952 ************************************ 00:04:50.952 10:33:11 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:50.952 10:33:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.952 10:33:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.952 10:33:11 -- common/autotest_common.sh@10 -- # set +x 00:04:50.952 ************************************ 00:04:50.952 START TEST rpc_client 00:04:50.952 ************************************ 00:04:50.952 10:33:11 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:50.952 * Looking for test storage... 00:04:50.952 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:50.952 10:33:11 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:50.952 10:33:11 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:50.952 10:33:11 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:50.952 10:33:11 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:50.952 10:33:11 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.952 10:33:11 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.952 10:33:11 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.952 10:33:11 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.952 10:33:11 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.952 10:33:11 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.952 10:33:11 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.952 10:33:11 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.952 10:33:11 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.952 10:33:11 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.952 10:33:11 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.952 10:33:11 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:50.952 10:33:11 rpc_client -- scripts/common.sh@345 
-- # : 1 00:04:50.952 10:33:11 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.952 10:33:11 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:50.952 10:33:11 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:50.952 10:33:11 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:50.952 10:33:11 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.952 10:33:11 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:50.952 10:33:11 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.952 10:33:11 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:50.952 10:33:11 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:50.952 10:33:11 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.952 10:33:11 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:50.952 10:33:11 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.952 10:33:11 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.952 10:33:11 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.952 10:33:11 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:50.952 10:33:11 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.952 10:33:11 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:50.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.952 --rc genhtml_branch_coverage=1 00:04:50.952 --rc genhtml_function_coverage=1 00:04:50.952 --rc genhtml_legend=1 00:04:50.952 --rc geninfo_all_blocks=1 00:04:50.952 --rc geninfo_unexecuted_blocks=1 00:04:50.952 00:04:50.952 ' 00:04:50.952 10:33:11 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:50.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.952 --rc genhtml_branch_coverage=1 00:04:50.952 --rc genhtml_function_coverage=1 00:04:50.952 --rc 
genhtml_legend=1 00:04:50.952 --rc geninfo_all_blocks=1 00:04:50.952 --rc geninfo_unexecuted_blocks=1 00:04:50.952 00:04:50.952 ' 00:04:50.952 10:33:11 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:50.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.952 --rc genhtml_branch_coverage=1 00:04:50.952 --rc genhtml_function_coverage=1 00:04:50.952 --rc genhtml_legend=1 00:04:50.952 --rc geninfo_all_blocks=1 00:04:50.952 --rc geninfo_unexecuted_blocks=1 00:04:50.952 00:04:50.952 ' 00:04:50.952 10:33:11 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:50.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.952 --rc genhtml_branch_coverage=1 00:04:50.952 --rc genhtml_function_coverage=1 00:04:50.952 --rc genhtml_legend=1 00:04:50.952 --rc geninfo_all_blocks=1 00:04:50.952 --rc geninfo_unexecuted_blocks=1 00:04:50.952 00:04:50.952 ' 00:04:50.952 10:33:11 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:50.952 OK 00:04:50.952 10:33:11 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:50.952 00:04:50.952 real 0m0.258s 00:04:50.952 user 0m0.160s 00:04:50.952 sys 0m0.105s 00:04:50.952 10:33:11 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.952 10:33:11 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:50.952 ************************************ 00:04:50.952 END TEST rpc_client 00:04:50.952 ************************************ 00:04:50.952 10:33:11 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:50.952 10:33:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.952 10:33:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.952 10:33:11 -- common/autotest_common.sh@10 -- # set +x 00:04:50.952 ************************************ 00:04:50.952 START TEST json_config 
00:04:50.952 ************************************ 00:04:50.952 10:33:11 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:50.952 10:33:11 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:50.953 10:33:11 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:50.953 10:33:11 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:50.953 10:33:12 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:50.953 10:33:12 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.953 10:33:12 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.953 10:33:12 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.953 10:33:12 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.953 10:33:12 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.953 10:33:12 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.953 10:33:12 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.953 10:33:12 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.953 10:33:12 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.953 10:33:12 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.953 10:33:12 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.953 10:33:12 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:50.953 10:33:12 json_config -- scripts/common.sh@345 -- # : 1 00:04:50.953 10:33:12 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.953 10:33:12 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:50.953 10:33:12 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:50.953 10:33:12 json_config -- scripts/common.sh@353 -- # local d=1 00:04:50.953 10:33:12 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.953 10:33:12 json_config -- scripts/common.sh@355 -- # echo 1 00:04:50.953 10:33:12 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.953 10:33:12 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:50.953 10:33:12 json_config -- scripts/common.sh@353 -- # local d=2 00:04:50.953 10:33:12 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.953 10:33:12 json_config -- scripts/common.sh@355 -- # echo 2 00:04:50.953 10:33:12 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.953 10:33:12 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.953 10:33:12 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.953 10:33:12 json_config -- scripts/common.sh@368 -- # return 0 00:04:50.953 10:33:12 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.953 10:33:12 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:50.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.953 --rc genhtml_branch_coverage=1 00:04:50.953 --rc genhtml_function_coverage=1 00:04:50.953 --rc genhtml_legend=1 00:04:50.953 --rc geninfo_all_blocks=1 00:04:50.953 --rc geninfo_unexecuted_blocks=1 00:04:50.953 00:04:50.953 ' 00:04:50.953 10:33:12 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:50.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.953 --rc genhtml_branch_coverage=1 00:04:50.953 --rc genhtml_function_coverage=1 00:04:50.953 --rc genhtml_legend=1 00:04:50.953 --rc geninfo_all_blocks=1 00:04:50.953 --rc geninfo_unexecuted_blocks=1 00:04:50.953 00:04:50.953 ' 00:04:50.953 10:33:12 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:50.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.953 --rc genhtml_branch_coverage=1 00:04:50.953 --rc genhtml_function_coverage=1 00:04:50.953 --rc genhtml_legend=1 00:04:50.953 --rc geninfo_all_blocks=1 00:04:50.953 --rc geninfo_unexecuted_blocks=1 00:04:50.953 00:04:50.953 ' 00:04:50.953 10:33:12 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:50.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.953 --rc genhtml_branch_coverage=1 00:04:50.953 --rc genhtml_function_coverage=1 00:04:50.953 --rc genhtml_legend=1 00:04:50.953 --rc geninfo_all_blocks=1 00:04:50.953 --rc geninfo_unexecuted_blocks=1 00:04:50.953 00:04:50.953 ' 00:04:50.953 10:33:12 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:50.953 10:33:12 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:50.953 10:33:12 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:50.953 10:33:12 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:50.953 10:33:12 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:50.953 10:33:12 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:50.953 10:33:12 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:50.953 10:33:12 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:50.953 10:33:12 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:50.953 10:33:12 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:50.953 10:33:12 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:50.953 10:33:12 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:50.953 10:33:12 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7725ba29-e2e6-419d-b1de-67bc0686c209 00:04:50.953 10:33:12 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=7725ba29-e2e6-419d-b1de-67bc0686c209 00:04:50.953 10:33:12 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:50.953 10:33:12 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:50.953 10:33:12 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:50.953 10:33:12 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:50.953 10:33:12 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:50.953 10:33:12 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:50.953 10:33:12 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:50.953 10:33:12 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:50.953 10:33:12 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:50.953 10:33:12 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.953 10:33:12 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.953 10:33:12 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.953 10:33:12 json_config -- paths/export.sh@5 -- # export PATH 00:04:50.953 10:33:12 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.953 10:33:12 json_config -- nvmf/common.sh@51 -- # : 0 00:04:50.953 10:33:12 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:50.953 10:33:12 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:50.953 10:33:12 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:50.953 10:33:12 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:50.953 10:33:12 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:50.953 10:33:12 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:50.953 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:50.953 10:33:12 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:50.953 10:33:12 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:50.953 10:33:12 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:50.953 10:33:12 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:04:50.953 10:33:12 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:50.953 10:33:12 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:50.953 10:33:12 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:50.953 10:33:12 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:50.953 WARNING: No tests are enabled so not running JSON configuration tests 00:04:50.953 10:33:12 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:50.953 10:33:12 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:50.953 00:04:50.953 real 0m0.177s 00:04:50.953 user 0m0.115s 00:04:50.953 sys 0m0.067s 00:04:50.953 10:33:12 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.953 10:33:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.953 ************************************ 00:04:50.953 END TEST json_config 00:04:50.953 ************************************ 00:04:51.212 10:33:12 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:51.212 10:33:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.212 10:33:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.212 10:33:12 -- common/autotest_common.sh@10 -- # set +x 00:04:51.212 ************************************ 00:04:51.212 START TEST json_config_extra_key 00:04:51.212 ************************************ 00:04:51.212 10:33:12 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:51.213 10:33:12 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:51.213 10:33:12 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:04:51.213 10:33:12 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:51.213 10:33:12 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:51.213 10:33:12 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.213 10:33:12 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.213 10:33:12 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.213 10:33:12 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.213 10:33:12 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.213 10:33:12 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.213 10:33:12 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:51.213 10:33:12 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.213 10:33:12 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.213 10:33:12 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.213 10:33:12 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:51.213 10:33:12 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:51.213 10:33:12 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:51.213 10:33:12 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.213 10:33:12 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:51.213 10:33:12 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:51.213 10:33:12 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:51.213 10:33:12 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.213 10:33:12 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:51.213 10:33:12 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.213 10:33:12 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:51.213 10:33:12 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:51.213 10:33:12 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.213 10:33:12 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:51.213 10:33:12 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.213 10:33:12 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.213 10:33:12 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.213 10:33:12 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:51.213 10:33:12 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.213 10:33:12 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:51.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.213 --rc genhtml_branch_coverage=1 00:04:51.213 --rc genhtml_function_coverage=1 00:04:51.213 --rc genhtml_legend=1 00:04:51.213 --rc geninfo_all_blocks=1 00:04:51.213 --rc geninfo_unexecuted_blocks=1 00:04:51.213 00:04:51.213 ' 00:04:51.213 10:33:12 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:51.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.213 --rc genhtml_branch_coverage=1 00:04:51.213 --rc genhtml_function_coverage=1 00:04:51.213 --rc 
genhtml_legend=1 00:04:51.213 --rc geninfo_all_blocks=1 00:04:51.213 --rc geninfo_unexecuted_blocks=1 00:04:51.213 00:04:51.213 ' 00:04:51.213 10:33:12 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:51.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.213 --rc genhtml_branch_coverage=1 00:04:51.213 --rc genhtml_function_coverage=1 00:04:51.213 --rc genhtml_legend=1 00:04:51.213 --rc geninfo_all_blocks=1 00:04:51.213 --rc geninfo_unexecuted_blocks=1 00:04:51.213 00:04:51.213 ' 00:04:51.213 10:33:12 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:51.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.213 --rc genhtml_branch_coverage=1 00:04:51.213 --rc genhtml_function_coverage=1 00:04:51.213 --rc genhtml_legend=1 00:04:51.213 --rc geninfo_all_blocks=1 00:04:51.213 --rc geninfo_unexecuted_blocks=1 00:04:51.213 00:04:51.213 ' 00:04:51.213 10:33:12 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:51.213 10:33:12 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:51.213 10:33:12 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:51.213 10:33:12 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:51.213 10:33:12 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:51.213 10:33:12 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:51.213 10:33:12 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:51.213 10:33:12 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:51.213 10:33:12 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:51.213 10:33:12 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:51.213 10:33:12 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:51.213 10:33:12 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:51.213 10:33:12 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7725ba29-e2e6-419d-b1de-67bc0686c209 00:04:51.213 10:33:12 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=7725ba29-e2e6-419d-b1de-67bc0686c209 00:04:51.213 10:33:12 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:51.213 10:33:12 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:51.213 10:33:12 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:51.213 10:33:12 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:51.213 10:33:12 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:51.213 10:33:12 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:51.213 10:33:12 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:51.213 10:33:12 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:51.213 10:33:12 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:51.213 10:33:12 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.213 10:33:12 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.213 10:33:12 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.213 10:33:12 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:51.213 10:33:12 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.213 10:33:12 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:51.213 10:33:12 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:51.213 10:33:12 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:51.213 10:33:12 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:51.213 10:33:12 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:51.213 10:33:12 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:04:51.213 10:33:12 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:51.213 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:51.213 10:33:12 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:51.213 10:33:12 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:51.213 10:33:12 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:51.213 10:33:12 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:51.213 10:33:12 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:51.213 10:33:12 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:51.213 10:33:12 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:51.213 10:33:12 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:51.213 10:33:12 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:51.213 10:33:12 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:51.213 10:33:12 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:51.213 10:33:12 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:51.213 10:33:12 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:51.213 INFO: launching applications... 00:04:51.213 10:33:12 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:04:51.213 10:33:12 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:51.213 10:33:12 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:51.213 10:33:12 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:51.213 10:33:12 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:51.213 10:33:12 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:51.213 10:33:12 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:51.213 10:33:12 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:51.214 10:33:12 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:51.214 10:33:12 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57581 00:04:51.214 10:33:12 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:51.214 Waiting for target to run... 00:04:51.214 10:33:12 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:51.214 10:33:12 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57581 /var/tmp/spdk_tgt.sock 00:04:51.214 10:33:12 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57581 ']' 00:04:51.214 10:33:12 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:51.214 10:33:12 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:51.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:51.214 10:33:12 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:51.214 10:33:12 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:51.214 10:33:12 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:51.472 [2024-11-15 10:33:12.433456] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:04:51.472 [2024-11-15 10:33:12.433630] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57581 ] 00:04:52.040 [2024-11-15 10:33:12.891114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.040 [2024-11-15 10:33:13.031230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.606 10:33:13 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.606 00:04:52.606 10:33:13 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:52.606 10:33:13 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:52.606 INFO: shutting down applications... 00:04:52.606 10:33:13 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:04:52.606 10:33:13 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:52.606 10:33:13 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:52.606 10:33:13 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:52.606 10:33:13 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57581 ]] 00:04:52.606 10:33:13 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57581 00:04:52.606 10:33:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:52.606 10:33:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:52.606 10:33:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57581 00:04:52.606 10:33:13 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:53.173 10:33:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:53.173 10:33:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:53.173 10:33:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57581 00:04:53.173 10:33:14 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:53.787 10:33:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:53.787 10:33:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:53.787 10:33:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57581 00:04:53.787 10:33:14 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:54.354 10:33:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:54.354 10:33:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:54.354 10:33:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57581 00:04:54.354 10:33:15 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:54.613 10:33:15 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:04:54.613 10:33:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:54.613 10:33:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57581 00:04:54.613 10:33:15 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:55.180 10:33:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:55.180 10:33:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:55.180 10:33:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57581 00:04:55.180 10:33:16 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:55.745 10:33:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:55.745 10:33:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:55.745 10:33:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57581 00:04:55.745 10:33:16 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:55.745 10:33:16 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:55.745 10:33:16 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:55.745 10:33:16 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:55.745 SPDK target shutdown done 00:04:55.745 Success 00:04:55.745 10:33:16 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:55.746 00:04:55.746 real 0m4.616s 00:04:55.746 user 0m4.063s 00:04:55.746 sys 0m0.610s 00:04:55.746 10:33:16 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.746 10:33:16 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:55.746 ************************************ 00:04:55.746 END TEST json_config_extra_key 00:04:55.746 ************************************ 00:04:55.746 10:33:16 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:55.746 10:33:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.746 10:33:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.746 10:33:16 -- common/autotest_common.sh@10 -- # set +x 00:04:55.746 ************************************ 00:04:55.746 START TEST alias_rpc 00:04:55.746 ************************************ 00:04:55.746 10:33:16 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:55.746 * Looking for test storage... 00:04:55.746 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:55.746 10:33:16 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:55.746 10:33:16 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:55.746 10:33:16 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:56.006 10:33:16 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:56.006 10:33:16 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:56.006 10:33:16 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:56.006 10:33:16 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:56.006 10:33:16 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.006 10:33:16 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:56.006 10:33:16 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:56.006 10:33:16 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:56.006 10:33:16 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:56.006 10:33:16 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:56.006 10:33:16 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:56.006 10:33:16 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:56.006 10:33:16 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:56.006 10:33:16 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:04:56.006 10:33:16 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:56.006 10:33:16 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:56.006 10:33:16 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:56.006 10:33:16 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:56.006 10:33:16 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.006 10:33:16 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:56.006 10:33:16 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:56.006 10:33:16 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:56.006 10:33:16 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:56.006 10:33:16 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.006 10:33:16 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:56.006 10:33:16 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:56.006 10:33:16 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:56.006 10:33:16 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:56.006 10:33:16 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:56.006 10:33:16 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.006 10:33:16 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:56.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.006 --rc genhtml_branch_coverage=1 00:04:56.006 --rc genhtml_function_coverage=1 00:04:56.006 --rc genhtml_legend=1 00:04:56.006 --rc geninfo_all_blocks=1 00:04:56.006 --rc geninfo_unexecuted_blocks=1 00:04:56.006 00:04:56.006 ' 00:04:56.006 10:33:16 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:56.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.006 --rc genhtml_branch_coverage=1 00:04:56.006 --rc genhtml_function_coverage=1 00:04:56.006 --rc 
genhtml_legend=1 00:04:56.006 --rc geninfo_all_blocks=1 00:04:56.006 --rc geninfo_unexecuted_blocks=1 00:04:56.006 00:04:56.006 ' 00:04:56.006 10:33:16 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:56.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.006 --rc genhtml_branch_coverage=1 00:04:56.006 --rc genhtml_function_coverage=1 00:04:56.006 --rc genhtml_legend=1 00:04:56.006 --rc geninfo_all_blocks=1 00:04:56.006 --rc geninfo_unexecuted_blocks=1 00:04:56.006 00:04:56.006 ' 00:04:56.006 10:33:16 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:56.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.006 --rc genhtml_branch_coverage=1 00:04:56.006 --rc genhtml_function_coverage=1 00:04:56.006 --rc genhtml_legend=1 00:04:56.006 --rc geninfo_all_blocks=1 00:04:56.006 --rc geninfo_unexecuted_blocks=1 00:04:56.006 00:04:56.006 ' 00:04:56.006 10:33:16 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:56.006 10:33:16 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57687 00:04:56.006 10:33:16 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57687 00:04:56.006 10:33:17 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57687 ']' 00:04:56.006 10:33:16 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:56.006 10:33:17 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.006 10:33:17 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.006 10:33:17 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:56.006 10:33:17 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.006 10:33:17 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.006 [2024-11-15 10:33:17.111128] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:04:56.006 [2024-11-15 10:33:17.111283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57687 ] 00:04:56.265 [2024-11-15 10:33:17.289164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.524 [2024-11-15 10:33:17.450654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.476 10:33:18 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:57.476 10:33:18 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:57.476 10:33:18 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:57.476 10:33:18 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57687 00:04:57.476 10:33:18 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57687 ']' 00:04:57.476 10:33:18 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57687 00:04:57.476 10:33:18 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:57.476 10:33:18 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:57.476 10:33:18 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57687 00:04:57.745 10:33:18 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:57.745 10:33:18 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:57.745 killing process with pid 57687 00:04:57.745 10:33:18 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57687' 00:04:57.745 10:33:18 alias_rpc -- 
common/autotest_common.sh@973 -- # kill 57687 00:04:57.745 10:33:18 alias_rpc -- common/autotest_common.sh@978 -- # wait 57687 00:05:00.275 00:05:00.275 real 0m4.089s 00:05:00.275 user 0m4.229s 00:05:00.275 sys 0m0.626s 00:05:00.275 10:33:20 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.276 10:33:20 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.276 ************************************ 00:05:00.276 END TEST alias_rpc 00:05:00.276 ************************************ 00:05:00.276 10:33:20 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:00.276 10:33:20 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:00.276 10:33:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.276 10:33:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.276 10:33:20 -- common/autotest_common.sh@10 -- # set +x 00:05:00.276 ************************************ 00:05:00.276 START TEST spdkcli_tcp 00:05:00.276 ************************************ 00:05:00.276 10:33:20 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:00.276 * Looking for test storage... 
00:05:00.276 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:00.276 10:33:21 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:00.276 10:33:21 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:00.276 10:33:21 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:00.276 10:33:21 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:00.276 10:33:21 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.276 10:33:21 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.276 10:33:21 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.276 10:33:21 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.276 10:33:21 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.276 10:33:21 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.276 10:33:21 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.276 10:33:21 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.276 10:33:21 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.276 10:33:21 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.276 10:33:21 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.276 10:33:21 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:00.276 10:33:21 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:00.276 10:33:21 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.276 10:33:21 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:00.276 10:33:21 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:00.276 10:33:21 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:00.276 10:33:21 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.276 10:33:21 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:00.276 10:33:21 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.276 10:33:21 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:00.276 10:33:21 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:00.276 10:33:21 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.276 10:33:21 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:00.276 10:33:21 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.276 10:33:21 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.276 10:33:21 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.276 10:33:21 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:00.276 10:33:21 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.276 10:33:21 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:00.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.276 --rc genhtml_branch_coverage=1 00:05:00.276 --rc genhtml_function_coverage=1 00:05:00.276 --rc genhtml_legend=1 00:05:00.276 --rc geninfo_all_blocks=1 00:05:00.276 --rc geninfo_unexecuted_blocks=1 00:05:00.276 00:05:00.276 ' 00:05:00.276 10:33:21 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:00.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.276 --rc genhtml_branch_coverage=1 00:05:00.276 --rc genhtml_function_coverage=1 00:05:00.276 --rc genhtml_legend=1 00:05:00.276 --rc geninfo_all_blocks=1 00:05:00.276 --rc geninfo_unexecuted_blocks=1 00:05:00.276 00:05:00.276 ' 00:05:00.276 10:33:21 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:00.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.276 --rc genhtml_branch_coverage=1 00:05:00.276 --rc genhtml_function_coverage=1 00:05:00.276 --rc genhtml_legend=1 00:05:00.276 --rc geninfo_all_blocks=1 00:05:00.276 --rc geninfo_unexecuted_blocks=1 00:05:00.276 00:05:00.276 ' 00:05:00.276 10:33:21 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:00.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.276 --rc genhtml_branch_coverage=1 00:05:00.276 --rc genhtml_function_coverage=1 00:05:00.276 --rc genhtml_legend=1 00:05:00.276 --rc geninfo_all_blocks=1 00:05:00.276 --rc geninfo_unexecuted_blocks=1 00:05:00.276 00:05:00.276 ' 00:05:00.276 10:33:21 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:00.276 10:33:21 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:00.276 10:33:21 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:00.276 10:33:21 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:00.276 10:33:21 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:00.276 10:33:21 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:00.276 10:33:21 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:00.276 10:33:21 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:00.276 10:33:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:00.276 10:33:21 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57794 00:05:00.276 10:33:21 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:00.276 10:33:21 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57794 00:05:00.276 10:33:21 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 57794 ']' 00:05:00.276 10:33:21 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.276 10:33:21 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.276 10:33:21 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.276 10:33:21 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.276 10:33:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:00.276 [2024-11-15 10:33:21.290037] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:05:00.276 [2024-11-15 10:33:21.290222] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57794 ] 00:05:00.535 [2024-11-15 10:33:21.475868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:00.535 [2024-11-15 10:33:21.612034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.535 [2024-11-15 10:33:21.612043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.470 10:33:22 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.470 10:33:22 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:01.470 10:33:22 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57822 00:05:01.470 10:33:22 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:01.470 10:33:22 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:01.729 [ 00:05:01.729 "bdev_malloc_delete", 
00:05:01.729 "bdev_malloc_create", 00:05:01.729 "bdev_null_resize", 00:05:01.729 "bdev_null_delete", 00:05:01.729 "bdev_null_create", 00:05:01.729 "bdev_nvme_cuse_unregister", 00:05:01.729 "bdev_nvme_cuse_register", 00:05:01.729 "bdev_opal_new_user", 00:05:01.729 "bdev_opal_set_lock_state", 00:05:01.729 "bdev_opal_delete", 00:05:01.729 "bdev_opal_get_info", 00:05:01.729 "bdev_opal_create", 00:05:01.729 "bdev_nvme_opal_revert", 00:05:01.729 "bdev_nvme_opal_init", 00:05:01.729 "bdev_nvme_send_cmd", 00:05:01.729 "bdev_nvme_set_keys", 00:05:01.729 "bdev_nvme_get_path_iostat", 00:05:01.729 "bdev_nvme_get_mdns_discovery_info", 00:05:01.729 "bdev_nvme_stop_mdns_discovery", 00:05:01.729 "bdev_nvme_start_mdns_discovery", 00:05:01.729 "bdev_nvme_set_multipath_policy", 00:05:01.729 "bdev_nvme_set_preferred_path", 00:05:01.729 "bdev_nvme_get_io_paths", 00:05:01.729 "bdev_nvme_remove_error_injection", 00:05:01.729 "bdev_nvme_add_error_injection", 00:05:01.729 "bdev_nvme_get_discovery_info", 00:05:01.729 "bdev_nvme_stop_discovery", 00:05:01.729 "bdev_nvme_start_discovery", 00:05:01.729 "bdev_nvme_get_controller_health_info", 00:05:01.729 "bdev_nvme_disable_controller", 00:05:01.729 "bdev_nvme_enable_controller", 00:05:01.729 "bdev_nvme_reset_controller", 00:05:01.729 "bdev_nvme_get_transport_statistics", 00:05:01.729 "bdev_nvme_apply_firmware", 00:05:01.729 "bdev_nvme_detach_controller", 00:05:01.729 "bdev_nvme_get_controllers", 00:05:01.729 "bdev_nvme_attach_controller", 00:05:01.729 "bdev_nvme_set_hotplug", 00:05:01.729 "bdev_nvme_set_options", 00:05:01.729 "bdev_passthru_delete", 00:05:01.729 "bdev_passthru_create", 00:05:01.729 "bdev_lvol_set_parent_bdev", 00:05:01.729 "bdev_lvol_set_parent", 00:05:01.729 "bdev_lvol_check_shallow_copy", 00:05:01.729 "bdev_lvol_start_shallow_copy", 00:05:01.729 "bdev_lvol_grow_lvstore", 00:05:01.729 "bdev_lvol_get_lvols", 00:05:01.729 "bdev_lvol_get_lvstores", 00:05:01.729 "bdev_lvol_delete", 00:05:01.729 "bdev_lvol_set_read_only", 
00:05:01.729 "bdev_lvol_resize", 00:05:01.729 "bdev_lvol_decouple_parent", 00:05:01.729 "bdev_lvol_inflate", 00:05:01.729 "bdev_lvol_rename", 00:05:01.729 "bdev_lvol_clone_bdev", 00:05:01.729 "bdev_lvol_clone", 00:05:01.729 "bdev_lvol_snapshot", 00:05:01.729 "bdev_lvol_create", 00:05:01.729 "bdev_lvol_delete_lvstore", 00:05:01.729 "bdev_lvol_rename_lvstore", 00:05:01.729 "bdev_lvol_create_lvstore", 00:05:01.729 "bdev_raid_set_options", 00:05:01.729 "bdev_raid_remove_base_bdev", 00:05:01.729 "bdev_raid_add_base_bdev", 00:05:01.729 "bdev_raid_delete", 00:05:01.729 "bdev_raid_create", 00:05:01.729 "bdev_raid_get_bdevs", 00:05:01.729 "bdev_error_inject_error", 00:05:01.729 "bdev_error_delete", 00:05:01.729 "bdev_error_create", 00:05:01.729 "bdev_split_delete", 00:05:01.729 "bdev_split_create", 00:05:01.729 "bdev_delay_delete", 00:05:01.729 "bdev_delay_create", 00:05:01.729 "bdev_delay_update_latency", 00:05:01.729 "bdev_zone_block_delete", 00:05:01.729 "bdev_zone_block_create", 00:05:01.729 "blobfs_create", 00:05:01.729 "blobfs_detect", 00:05:01.729 "blobfs_set_cache_size", 00:05:01.729 "bdev_aio_delete", 00:05:01.729 "bdev_aio_rescan", 00:05:01.729 "bdev_aio_create", 00:05:01.729 "bdev_ftl_set_property", 00:05:01.729 "bdev_ftl_get_properties", 00:05:01.729 "bdev_ftl_get_stats", 00:05:01.729 "bdev_ftl_unmap", 00:05:01.729 "bdev_ftl_unload", 00:05:01.729 "bdev_ftl_delete", 00:05:01.729 "bdev_ftl_load", 00:05:01.729 "bdev_ftl_create", 00:05:01.729 "bdev_virtio_attach_controller", 00:05:01.729 "bdev_virtio_scsi_get_devices", 00:05:01.729 "bdev_virtio_detach_controller", 00:05:01.730 "bdev_virtio_blk_set_hotplug", 00:05:01.730 "bdev_iscsi_delete", 00:05:01.730 "bdev_iscsi_create", 00:05:01.730 "bdev_iscsi_set_options", 00:05:01.730 "accel_error_inject_error", 00:05:01.730 "ioat_scan_accel_module", 00:05:01.730 "dsa_scan_accel_module", 00:05:01.730 "iaa_scan_accel_module", 00:05:01.730 "keyring_file_remove_key", 00:05:01.730 "keyring_file_add_key", 00:05:01.730 
"keyring_linux_set_options", 00:05:01.730 "fsdev_aio_delete", 00:05:01.730 "fsdev_aio_create", 00:05:01.730 "iscsi_get_histogram", 00:05:01.730 "iscsi_enable_histogram", 00:05:01.730 "iscsi_set_options", 00:05:01.730 "iscsi_get_auth_groups", 00:05:01.730 "iscsi_auth_group_remove_secret", 00:05:01.730 "iscsi_auth_group_add_secret", 00:05:01.730 "iscsi_delete_auth_group", 00:05:01.730 "iscsi_create_auth_group", 00:05:01.730 "iscsi_set_discovery_auth", 00:05:01.730 "iscsi_get_options", 00:05:01.730 "iscsi_target_node_request_logout", 00:05:01.730 "iscsi_target_node_set_redirect", 00:05:01.730 "iscsi_target_node_set_auth", 00:05:01.730 "iscsi_target_node_add_lun", 00:05:01.730 "iscsi_get_stats", 00:05:01.730 "iscsi_get_connections", 00:05:01.730 "iscsi_portal_group_set_auth", 00:05:01.730 "iscsi_start_portal_group", 00:05:01.730 "iscsi_delete_portal_group", 00:05:01.730 "iscsi_create_portal_group", 00:05:01.730 "iscsi_get_portal_groups", 00:05:01.730 "iscsi_delete_target_node", 00:05:01.730 "iscsi_target_node_remove_pg_ig_maps", 00:05:01.730 "iscsi_target_node_add_pg_ig_maps", 00:05:01.730 "iscsi_create_target_node", 00:05:01.730 "iscsi_get_target_nodes", 00:05:01.730 "iscsi_delete_initiator_group", 00:05:01.730 "iscsi_initiator_group_remove_initiators", 00:05:01.730 "iscsi_initiator_group_add_initiators", 00:05:01.730 "iscsi_create_initiator_group", 00:05:01.730 "iscsi_get_initiator_groups", 00:05:01.730 "nvmf_set_crdt", 00:05:01.730 "nvmf_set_config", 00:05:01.730 "nvmf_set_max_subsystems", 00:05:01.730 "nvmf_stop_mdns_prr", 00:05:01.730 "nvmf_publish_mdns_prr", 00:05:01.730 "nvmf_subsystem_get_listeners", 00:05:01.730 "nvmf_subsystem_get_qpairs", 00:05:01.730 "nvmf_subsystem_get_controllers", 00:05:01.730 "nvmf_get_stats", 00:05:01.730 "nvmf_get_transports", 00:05:01.730 "nvmf_create_transport", 00:05:01.730 "nvmf_get_targets", 00:05:01.730 "nvmf_delete_target", 00:05:01.730 "nvmf_create_target", 00:05:01.730 "nvmf_subsystem_allow_any_host", 00:05:01.730 
"nvmf_subsystem_set_keys", 00:05:01.730 "nvmf_subsystem_remove_host", 00:05:01.730 "nvmf_subsystem_add_host", 00:05:01.730 "nvmf_ns_remove_host", 00:05:01.730 "nvmf_ns_add_host", 00:05:01.730 "nvmf_subsystem_remove_ns", 00:05:01.730 "nvmf_subsystem_set_ns_ana_group", 00:05:01.730 "nvmf_subsystem_add_ns", 00:05:01.730 "nvmf_subsystem_listener_set_ana_state", 00:05:01.730 "nvmf_discovery_get_referrals", 00:05:01.730 "nvmf_discovery_remove_referral", 00:05:01.730 "nvmf_discovery_add_referral", 00:05:01.730 "nvmf_subsystem_remove_listener", 00:05:01.730 "nvmf_subsystem_add_listener", 00:05:01.730 "nvmf_delete_subsystem", 00:05:01.730 "nvmf_create_subsystem", 00:05:01.730 "nvmf_get_subsystems", 00:05:01.730 "env_dpdk_get_mem_stats", 00:05:01.730 "nbd_get_disks", 00:05:01.730 "nbd_stop_disk", 00:05:01.730 "nbd_start_disk", 00:05:01.730 "ublk_recover_disk", 00:05:01.730 "ublk_get_disks", 00:05:01.730 "ublk_stop_disk", 00:05:01.730 "ublk_start_disk", 00:05:01.730 "ublk_destroy_target", 00:05:01.730 "ublk_create_target", 00:05:01.730 "virtio_blk_create_transport", 00:05:01.730 "virtio_blk_get_transports", 00:05:01.730 "vhost_controller_set_coalescing", 00:05:01.730 "vhost_get_controllers", 00:05:01.730 "vhost_delete_controller", 00:05:01.730 "vhost_create_blk_controller", 00:05:01.730 "vhost_scsi_controller_remove_target", 00:05:01.730 "vhost_scsi_controller_add_target", 00:05:01.730 "vhost_start_scsi_controller", 00:05:01.730 "vhost_create_scsi_controller", 00:05:01.730 "thread_set_cpumask", 00:05:01.730 "scheduler_set_options", 00:05:01.730 "framework_get_governor", 00:05:01.730 "framework_get_scheduler", 00:05:01.730 "framework_set_scheduler", 00:05:01.730 "framework_get_reactors", 00:05:01.730 "thread_get_io_channels", 00:05:01.730 "thread_get_pollers", 00:05:01.730 "thread_get_stats", 00:05:01.730 "framework_monitor_context_switch", 00:05:01.730 "spdk_kill_instance", 00:05:01.730 "log_enable_timestamps", 00:05:01.730 "log_get_flags", 00:05:01.730 "log_clear_flag", 
00:05:01.730 "log_set_flag", 00:05:01.730 "log_get_level", 00:05:01.730 "log_set_level", 00:05:01.730 "log_get_print_level", 00:05:01.730 "log_set_print_level", 00:05:01.730 "framework_enable_cpumask_locks", 00:05:01.730 "framework_disable_cpumask_locks", 00:05:01.730 "framework_wait_init", 00:05:01.730 "framework_start_init", 00:05:01.730 "scsi_get_devices", 00:05:01.730 "bdev_get_histogram", 00:05:01.730 "bdev_enable_histogram", 00:05:01.730 "bdev_set_qos_limit", 00:05:01.730 "bdev_set_qd_sampling_period", 00:05:01.730 "bdev_get_bdevs", 00:05:01.730 "bdev_reset_iostat", 00:05:01.730 "bdev_get_iostat", 00:05:01.730 "bdev_examine", 00:05:01.730 "bdev_wait_for_examine", 00:05:01.730 "bdev_set_options", 00:05:01.730 "accel_get_stats", 00:05:01.730 "accel_set_options", 00:05:01.730 "accel_set_driver", 00:05:01.730 "accel_crypto_key_destroy", 00:05:01.730 "accel_crypto_keys_get", 00:05:01.730 "accel_crypto_key_create", 00:05:01.730 "accel_assign_opc", 00:05:01.730 "accel_get_module_info", 00:05:01.730 "accel_get_opc_assignments", 00:05:01.730 "vmd_rescan", 00:05:01.730 "vmd_remove_device", 00:05:01.730 "vmd_enable", 00:05:01.730 "sock_get_default_impl", 00:05:01.730 "sock_set_default_impl", 00:05:01.730 "sock_impl_set_options", 00:05:01.730 "sock_impl_get_options", 00:05:01.730 "iobuf_get_stats", 00:05:01.730 "iobuf_set_options", 00:05:01.730 "keyring_get_keys", 00:05:01.730 "framework_get_pci_devices", 00:05:01.730 "framework_get_config", 00:05:01.730 "framework_get_subsystems", 00:05:01.730 "fsdev_set_opts", 00:05:01.730 "fsdev_get_opts", 00:05:01.730 "trace_get_info", 00:05:01.730 "trace_get_tpoint_group_mask", 00:05:01.730 "trace_disable_tpoint_group", 00:05:01.730 "trace_enable_tpoint_group", 00:05:01.730 "trace_clear_tpoint_mask", 00:05:01.730 "trace_set_tpoint_mask", 00:05:01.730 "notify_get_notifications", 00:05:01.730 "notify_get_types", 00:05:01.730 "spdk_get_version", 00:05:01.730 "rpc_get_methods" 00:05:01.730 ] 00:05:01.730 10:33:22 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:01.730 10:33:22 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:01.730 10:33:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:01.730 10:33:22 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:01.730 10:33:22 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57794 00:05:01.730 10:33:22 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57794 ']' 00:05:01.730 10:33:22 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57794 00:05:01.730 10:33:22 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:01.730 10:33:22 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:01.730 10:33:22 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57794 00:05:01.730 10:33:22 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:01.730 10:33:22 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:01.730 killing process with pid 57794 00:05:01.730 10:33:22 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57794' 00:05:01.730 10:33:22 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57794 00:05:01.730 10:33:22 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57794 00:05:04.310 00:05:04.310 real 0m4.123s 00:05:04.310 user 0m7.453s 00:05:04.310 sys 0m0.652s 00:05:04.310 10:33:25 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.310 10:33:25 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:04.310 ************************************ 00:05:04.310 END TEST spdkcli_tcp 00:05:04.310 ************************************ 00:05:04.310 10:33:25 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:04.310 10:33:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.310 10:33:25 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.310 10:33:25 -- common/autotest_common.sh@10 -- # set +x 00:05:04.310 ************************************ 00:05:04.310 START TEST dpdk_mem_utility 00:05:04.310 ************************************ 00:05:04.310 10:33:25 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:04.310 * Looking for test storage... 00:05:04.310 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:04.310 10:33:25 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:04.310 10:33:25 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:04.310 10:33:25 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:04.310 10:33:25 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:04.310 10:33:25 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.310 10:33:25 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.310 10:33:25 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.310 10:33:25 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.310 10:33:25 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.310 10:33:25 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.310 10:33:25 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.310 10:33:25 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.310 10:33:25 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.310 10:33:25 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.310 10:33:25 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.310 10:33:25 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:04.310 10:33:25 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:04.310 
10:33:25 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.310 10:33:25 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:04.310 10:33:25 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:04.310 10:33:25 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:04.310 10:33:25 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.310 10:33:25 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:04.310 10:33:25 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.310 10:33:25 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:04.310 10:33:25 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:04.310 10:33:25 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.310 10:33:25 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:04.310 10:33:25 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.310 10:33:25 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.310 10:33:25 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.310 10:33:25 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:04.310 10:33:25 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.310 10:33:25 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:04.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.310 --rc genhtml_branch_coverage=1 00:05:04.310 --rc genhtml_function_coverage=1 00:05:04.310 --rc genhtml_legend=1 00:05:04.310 --rc geninfo_all_blocks=1 00:05:04.310 --rc geninfo_unexecuted_blocks=1 00:05:04.310 00:05:04.310 ' 00:05:04.310 10:33:25 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:04.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.311 --rc 
genhtml_branch_coverage=1 00:05:04.311 --rc genhtml_function_coverage=1 00:05:04.311 --rc genhtml_legend=1 00:05:04.311 --rc geninfo_all_blocks=1 00:05:04.311 --rc geninfo_unexecuted_blocks=1 00:05:04.311 00:05:04.311 ' 00:05:04.311 10:33:25 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:04.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.311 --rc genhtml_branch_coverage=1 00:05:04.311 --rc genhtml_function_coverage=1 00:05:04.311 --rc genhtml_legend=1 00:05:04.311 --rc geninfo_all_blocks=1 00:05:04.311 --rc geninfo_unexecuted_blocks=1 00:05:04.311 00:05:04.311 ' 00:05:04.311 10:33:25 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:04.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.311 --rc genhtml_branch_coverage=1 00:05:04.311 --rc genhtml_function_coverage=1 00:05:04.311 --rc genhtml_legend=1 00:05:04.311 --rc geninfo_all_blocks=1 00:05:04.311 --rc geninfo_unexecuted_blocks=1 00:05:04.311 00:05:04.311 ' 00:05:04.311 10:33:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:04.311 10:33:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57916 00:05:04.311 10:33:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57916 00:05:04.311 10:33:25 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57916 ']' 00:05:04.311 10:33:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:04.311 10:33:25 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.311 10:33:25 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:04.311 10:33:25 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:04.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.311 10:33:25 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:04.311 10:33:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:04.311 [2024-11-15 10:33:25.414764] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:05:04.311 [2024-11-15 10:33:25.414944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57916 ] 00:05:04.569 [2024-11-15 10:33:25.601902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.832 [2024-11-15 10:33:25.752897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.767 10:33:26 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.767 10:33:26 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:05.767 10:33:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:05.767 10:33:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:05.767 10:33:26 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.767 10:33:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:05.767 { 00:05:05.767 "filename": "/tmp/spdk_mem_dump.txt" 00:05:05.767 } 00:05:05.767 10:33:26 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.767 10:33:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:05.767 DPDK memory size 816.000000 MiB in 1 heap(s) 00:05:05.767 1 heaps totaling size 816.000000 MiB 00:05:05.767 size: 
816.000000 MiB heap id: 0 00:05:05.767 end heaps---------- 00:05:05.767 9 mempools totaling size 595.772034 MiB 00:05:05.767 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:05.767 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:05.767 size: 92.545471 MiB name: bdev_io_57916 00:05:05.767 size: 50.003479 MiB name: msgpool_57916 00:05:05.767 size: 36.509338 MiB name: fsdev_io_57916 00:05:05.767 size: 21.763794 MiB name: PDU_Pool 00:05:05.767 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:05.767 size: 4.133484 MiB name: evtpool_57916 00:05:05.767 size: 0.026123 MiB name: Session_Pool 00:05:05.767 end mempools------- 00:05:05.767 6 memzones totaling size 4.142822 MiB 00:05:05.767 size: 1.000366 MiB name: RG_ring_0_57916 00:05:05.767 size: 1.000366 MiB name: RG_ring_1_57916 00:05:05.767 size: 1.000366 MiB name: RG_ring_4_57916 00:05:05.767 size: 1.000366 MiB name: RG_ring_5_57916 00:05:05.767 size: 0.125366 MiB name: RG_ring_2_57916 00:05:05.767 size: 0.015991 MiB name: RG_ring_3_57916 00:05:05.767 end memzones------- 00:05:05.767 10:33:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:05.767 heap id: 0 total size: 816.000000 MiB number of busy elements: 313 number of free elements: 18 00:05:05.767 list of free elements. 
size: 16.791870 MiB 00:05:05.767 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:05.767 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:05.767 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:05.767 element at address: 0x200018d00040 with size: 0.999939 MiB 00:05:05.767 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:05.767 element at address: 0x200019200000 with size: 0.999084 MiB 00:05:05.767 element at address: 0x200031e00000 with size: 0.994324 MiB 00:05:05.767 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:05.767 element at address: 0x200018a00000 with size: 0.959656 MiB 00:05:05.767 element at address: 0x200019500040 with size: 0.936401 MiB 00:05:05.767 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:05.767 element at address: 0x20001ac00000 with size: 0.562439 MiB 00:05:05.767 element at address: 0x200000c00000 with size: 0.490173 MiB 00:05:05.768 element at address: 0x200018e00000 with size: 0.487976 MiB 00:05:05.768 element at address: 0x200019600000 with size: 0.485413 MiB 00:05:05.768 element at address: 0x200012c00000 with size: 0.443237 MiB 00:05:05.768 element at address: 0x200028000000 with size: 0.390442 MiB 00:05:05.768 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:05.768 list of standard malloc elements. 
size: 199.287231 MiB 00:05:05.768 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:05.768 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:05.768 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:05:05.768 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:05.768 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:05.768 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:05.768 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:05:05.768 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:05.768 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:05.768 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:05:05.768 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:05.768 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:05.768 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:05.768 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:05.768 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:05.768 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:05.768 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:05.768 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:05.768 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:05.768 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:05.768 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:05.768 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:05.768 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:05.768 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:05.768 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:05.768 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:05.768 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:05.768 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:05:05.768 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:05.768 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:05:05.768 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:05.768 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:05.768 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:05.768 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:05.768 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:05.768 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:05.768 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:05.768 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:05.768 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:05.768 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:05.768 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:05:05.768 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:05.768 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200000c7e8c0 with 
size: 0.000244 MiB 00:05:05.768 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200012bff580 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:05.768 element at address: 
0x200012bff980 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200012c71780 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200012c71880 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200012c71980 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200012c72080 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200012c72180 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:05:05.768 
element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:05.768 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:05:05.768 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac913c0 with size: 0.000244 
MiB 00:05:05.768 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:05:05.768 element at address: 0x20001ac92fc0 
with size: 0.000244 MiB 00:05:05.769 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:05:05.769 element at 
address: 0x20001ac94bc0 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:05:05.769 element at address: 0x200028063f40 with size: 0.000244 MiB 00:05:05.769 element at address: 0x200028064040 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806af80 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806b080 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806b180 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806b280 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806b380 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806b480 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806b580 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806b680 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806b780 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806b880 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806b980 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806be80 with size: 0.000244 MiB 
00:05:05.769 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806c080 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806c180 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806c280 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806c380 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806c480 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806c580 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806c680 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806c780 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806c880 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806c980 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806d080 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806d180 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806d280 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806d380 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806d480 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806d580 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806d680 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806d780 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806d880 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806d980 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806da80 with 
size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806db80 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806de80 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806df80 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806e080 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806e180 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806e280 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806e380 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806e480 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806e580 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806e680 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806e780 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806e880 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806e980 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806f080 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806f180 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806f280 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806f380 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806f480 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806f580 with size: 0.000244 MiB 00:05:05.769 element at address: 
0x20002806f680 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806f780 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806f880 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806f980 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:05:05.769 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:05:05.769 list of memzone associated elements. size: 599.920898 MiB 00:05:05.769 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:05:05.769 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:05.769 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:05:05.769 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:05.769 element at address: 0x200012df4740 with size: 92.045105 MiB 00:05:05.769 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57916_0 00:05:05.769 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:05.769 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57916_0 00:05:05.769 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:05.769 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57916_0 00:05:05.769 element at address: 0x2000197be900 with size: 20.255615 MiB 00:05:05.769 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:05.769 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:05:05.769 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:05.769 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:05.769 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57916_0 00:05:05.769 element at address: 0x2000009ffdc0 with 
size: 2.000549 MiB 00:05:05.769 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57916 00:05:05.769 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:05.769 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57916 00:05:05.769 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:05.769 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:05.769 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:05:05.769 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:05.769 element at address: 0x200018afde00 with size: 1.008179 MiB 00:05:05.769 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:05.769 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:05:05.769 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:05.769 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:05.769 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57916 00:05:05.769 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:05.769 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57916 00:05:05.769 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:05:05.769 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57916 00:05:05.769 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:05:05.769 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57916 00:05:05.769 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:05.769 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57916 00:05:05.769 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:05.769 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57916 00:05:05.769 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:05:05.769 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:05.769 element at address: 0x200012c72280 with size: 
0.500549 MiB 00:05:05.769 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:05.769 element at address: 0x20001967c440 with size: 0.250549 MiB 00:05:05.769 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:05.769 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:05.769 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57916 00:05:05.769 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:05.769 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57916 00:05:05.769 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:05:05.769 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:05.769 element at address: 0x200028064140 with size: 0.023804 MiB 00:05:05.769 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:05.769 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:05.769 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57916 00:05:05.769 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:05:05.769 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:05.769 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:05.769 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57916 00:05:05.769 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:05.769 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57916 00:05:05.769 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:05.769 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57916 00:05:05.769 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:05:05.769 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:05.769 10:33:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:05.770 10:33:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 
-- # killprocess 57916 00:05:05.770 10:33:26 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57916 ']' 00:05:05.770 10:33:26 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57916 00:05:05.770 10:33:26 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:05.770 10:33:26 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.770 10:33:26 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57916 00:05:05.770 10:33:26 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:05.770 killing process with pid 57916 00:05:05.770 10:33:26 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:05.770 10:33:26 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57916' 00:05:05.770 10:33:26 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57916 00:05:05.770 10:33:26 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57916 00:05:08.309 00:05:08.310 real 0m4.046s 00:05:08.310 user 0m4.140s 00:05:08.310 sys 0m0.636s 00:05:08.310 10:33:29 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.310 10:33:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:08.310 ************************************ 00:05:08.310 END TEST dpdk_mem_utility 00:05:08.310 ************************************ 00:05:08.310 10:33:29 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:08.310 10:33:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.310 10:33:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.310 10:33:29 -- common/autotest_common.sh@10 -- # set +x 00:05:08.310 ************************************ 00:05:08.310 START TEST event 00:05:08.310 ************************************ 00:05:08.310 10:33:29 event -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:08.310 * Looking for test storage... 00:05:08.310 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:08.310 10:33:29 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:08.310 10:33:29 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:08.310 10:33:29 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:08.310 10:33:29 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:08.310 10:33:29 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.310 10:33:29 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.310 10:33:29 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.310 10:33:29 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.310 10:33:29 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.310 10:33:29 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.310 10:33:29 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.310 10:33:29 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.310 10:33:29 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.310 10:33:29 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.310 10:33:29 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.310 10:33:29 event -- scripts/common.sh@344 -- # case "$op" in 00:05:08.310 10:33:29 event -- scripts/common.sh@345 -- # : 1 00:05:08.310 10:33:29 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.310 10:33:29 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:08.310 10:33:29 event -- scripts/common.sh@365 -- # decimal 1 00:05:08.310 10:33:29 event -- scripts/common.sh@353 -- # local d=1 00:05:08.310 10:33:29 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.310 10:33:29 event -- scripts/common.sh@355 -- # echo 1 00:05:08.310 10:33:29 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.310 10:33:29 event -- scripts/common.sh@366 -- # decimal 2 00:05:08.310 10:33:29 event -- scripts/common.sh@353 -- # local d=2 00:05:08.310 10:33:29 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.310 10:33:29 event -- scripts/common.sh@355 -- # echo 2 00:05:08.310 10:33:29 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.310 10:33:29 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.310 10:33:29 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.310 10:33:29 event -- scripts/common.sh@368 -- # return 0 00:05:08.310 10:33:29 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.310 10:33:29 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:08.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.310 --rc genhtml_branch_coverage=1 00:05:08.310 --rc genhtml_function_coverage=1 00:05:08.310 --rc genhtml_legend=1 00:05:08.310 --rc geninfo_all_blocks=1 00:05:08.310 --rc geninfo_unexecuted_blocks=1 00:05:08.310 00:05:08.310 ' 00:05:08.310 10:33:29 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:08.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.310 --rc genhtml_branch_coverage=1 00:05:08.310 --rc genhtml_function_coverage=1 00:05:08.310 --rc genhtml_legend=1 00:05:08.310 --rc geninfo_all_blocks=1 00:05:08.310 --rc geninfo_unexecuted_blocks=1 00:05:08.310 00:05:08.310 ' 00:05:08.310 10:33:29 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:08.310 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:08.310 --rc genhtml_branch_coverage=1 00:05:08.310 --rc genhtml_function_coverage=1 00:05:08.310 --rc genhtml_legend=1 00:05:08.310 --rc geninfo_all_blocks=1 00:05:08.310 --rc geninfo_unexecuted_blocks=1 00:05:08.310 00:05:08.310 ' 00:05:08.310 10:33:29 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:08.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.310 --rc genhtml_branch_coverage=1 00:05:08.310 --rc genhtml_function_coverage=1 00:05:08.310 --rc genhtml_legend=1 00:05:08.310 --rc geninfo_all_blocks=1 00:05:08.310 --rc geninfo_unexecuted_blocks=1 00:05:08.310 00:05:08.310 ' 00:05:08.310 10:33:29 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:08.310 10:33:29 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:08.310 10:33:29 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:08.310 10:33:29 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:08.310 10:33:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.310 10:33:29 event -- common/autotest_common.sh@10 -- # set +x 00:05:08.310 ************************************ 00:05:08.310 START TEST event_perf 00:05:08.310 ************************************ 00:05:08.310 10:33:29 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:08.310 Running I/O for 1 seconds...[2024-11-15 10:33:29.460047] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:05:08.310 [2024-11-15 10:33:29.460293] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58030 ] 00:05:08.569 [2024-11-15 10:33:29.632210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:08.826 [2024-11-15 10:33:29.772244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.826 [2024-11-15 10:33:29.772365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:08.826 [2024-11-15 10:33:29.772445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.826 Running I/O for 1 seconds...[2024-11-15 10:33:29.772454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:10.201 00:05:10.201 lcore 0: 197501 00:05:10.201 lcore 1: 197501 00:05:10.201 lcore 2: 197501 00:05:10.201 lcore 3: 197501 00:05:10.201 done. 
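The trace above shows `scripts/common.sh` comparing the installed lcov version against 1.15 field by field (split on `.`, `-`, or `:`, then compare numerically, padding the shorter version with zeros). A minimal standalone sketch of that comparison logic, assuming bash; this is an illustration, not the SPDK helper itself:

```shell
# lt VER1 VER2 -> exit 0 if VER1 < VER2, else 1.
# Mirrors the cmp_versions trace above: split both versions on . - :
# and compare each numeric field, treating missing fields as 0.
lt() {
    local IFS=.-:
    local -a ver1=($1) ver2=($2)
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1   # first differing field decides
        (( a < b )) && return 0
    done
    return 1   # equal versions are not "less than"
}

lt 1.15 2 && echo "1.15 < 2"
```

This is why the trace ends with `return 0` for `lt 1.15 2`: the very first fields differ (1 < 2), so the loop decides immediately.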
00:05:10.201 00:05:10.201 real 0m1.602s 00:05:10.201 user 0m4.358s 00:05:10.201 sys 0m0.117s 00:05:10.201 10:33:31 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.201 10:33:31 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:10.201 ************************************ 00:05:10.201 END TEST event_perf 00:05:10.201 ************************************ 00:05:10.201 10:33:31 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:10.201 10:33:31 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:10.201 10:33:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.201 10:33:31 event -- common/autotest_common.sh@10 -- # set +x 00:05:10.201 ************************************ 00:05:10.201 START TEST event_reactor 00:05:10.201 ************************************ 00:05:10.201 10:33:31 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:10.201 [2024-11-15 10:33:31.107491] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:05:10.201 [2024-11-15 10:33:31.107681] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58069 ] 00:05:10.201 [2024-11-15 10:33:31.280694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.460 [2024-11-15 10:33:31.411475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.835 test_start 00:05:11.835 oneshot 00:05:11.835 tick 100 00:05:11.835 tick 100 00:05:11.835 tick 250 00:05:11.835 tick 100 00:05:11.835 tick 100 00:05:11.835 tick 250 00:05:11.835 tick 100 00:05:11.835 tick 500 00:05:11.835 tick 100 00:05:11.835 tick 100 00:05:11.835 tick 250 00:05:11.835 tick 100 00:05:11.835 tick 100 00:05:11.835 test_end 00:05:11.835 00:05:11.835 real 0m1.575s 00:05:11.835 user 0m1.366s 00:05:11.835 sys 0m0.101s 00:05:11.835 10:33:32 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.835 10:33:32 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:11.835 ************************************ 00:05:11.835 END TEST event_reactor 00:05:11.835 ************************************ 00:05:11.835 10:33:32 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:11.835 10:33:32 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:11.835 10:33:32 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.836 10:33:32 event -- common/autotest_common.sh@10 -- # set +x 00:05:11.836 ************************************ 00:05:11.836 START TEST event_reactor_perf 00:05:11.836 ************************************ 00:05:11.836 10:33:32 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:11.836 [2024-11-15 
10:33:32.741838] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:05:11.836 [2024-11-15 10:33:32.742013] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58106 ] 00:05:11.836 [2024-11-15 10:33:32.931095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.094 [2024-11-15 10:33:33.064570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.510 test_start 00:05:13.510 test_end 00:05:13.510 Performance: 283230 events per second 00:05:13.510 ************************************ 00:05:13.510 END TEST event_reactor_perf 00:05:13.510 ************************************ 00:05:13.510 00:05:13.510 real 0m1.601s 00:05:13.510 user 0m1.370s 00:05:13.510 sys 0m0.121s 00:05:13.510 10:33:34 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.510 10:33:34 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:13.510 10:33:34 event -- event/event.sh@49 -- # uname -s 00:05:13.510 10:33:34 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:13.510 10:33:34 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:13.510 10:33:34 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.510 10:33:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.510 10:33:34 event -- common/autotest_common.sh@10 -- # set +x 00:05:13.510 ************************************ 00:05:13.510 START TEST event_scheduler 00:05:13.510 ************************************ 00:05:13.510 10:33:34 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:13.510 * Looking for test storage... 
00:05:13.510 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:13.510 10:33:34 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:13.510 10:33:34 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:13.510 10:33:34 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:13.510 10:33:34 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:13.510 10:33:34 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.510 10:33:34 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.510 10:33:34 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.510 10:33:34 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.510 10:33:34 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.510 10:33:34 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.510 10:33:34 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.510 10:33:34 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.510 10:33:34 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.510 10:33:34 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.510 10:33:34 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.510 10:33:34 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:13.510 10:33:34 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:13.510 10:33:34 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.510 10:33:34 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:13.510 10:33:34 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:13.510 10:33:34 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:13.510 10:33:34 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.510 10:33:34 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:13.510 10:33:34 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.510 10:33:34 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:13.510 10:33:34 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:13.510 10:33:34 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.510 10:33:34 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:13.510 10:33:34 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.510 10:33:34 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.510 10:33:34 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.510 10:33:34 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:13.510 10:33:34 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.510 10:33:34 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:13.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.510 --rc genhtml_branch_coverage=1 00:05:13.510 --rc genhtml_function_coverage=1 00:05:13.510 --rc genhtml_legend=1 00:05:13.510 --rc geninfo_all_blocks=1 00:05:13.510 --rc geninfo_unexecuted_blocks=1 00:05:13.510 00:05:13.510 ' 00:05:13.510 10:33:34 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:13.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.510 --rc genhtml_branch_coverage=1 00:05:13.510 --rc genhtml_function_coverage=1 00:05:13.510 --rc 
genhtml_legend=1 00:05:13.510 --rc geninfo_all_blocks=1 00:05:13.510 --rc geninfo_unexecuted_blocks=1 00:05:13.510 00:05:13.510 ' 00:05:13.510 10:33:34 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:13.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.510 --rc genhtml_branch_coverage=1 00:05:13.510 --rc genhtml_function_coverage=1 00:05:13.510 --rc genhtml_legend=1 00:05:13.510 --rc geninfo_all_blocks=1 00:05:13.510 --rc geninfo_unexecuted_blocks=1 00:05:13.510 00:05:13.510 ' 00:05:13.510 10:33:34 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:13.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.510 --rc genhtml_branch_coverage=1 00:05:13.510 --rc genhtml_function_coverage=1 00:05:13.510 --rc genhtml_legend=1 00:05:13.510 --rc geninfo_all_blocks=1 00:05:13.510 --rc geninfo_unexecuted_blocks=1 00:05:13.510 00:05:13.510 ' 00:05:13.510 10:33:34 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:13.510 10:33:34 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58182 00:05:13.510 10:33:34 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:13.510 10:33:34 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:13.510 10:33:34 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58182 00:05:13.510 10:33:34 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58182 ']' 00:05:13.510 10:33:34 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:13.510 10:33:34 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.510 10:33:34 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.511 10:33:34 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.511 10:33:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:13.801 [2024-11-15 10:33:34.669197] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:05:13.801 [2024-11-15 10:33:34.670513] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58182 ] 00:05:13.801 [2024-11-15 10:33:34.858800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:14.059 [2024-11-15 10:33:35.026740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.059 [2024-11-15 10:33:35.026915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.059 [2024-11-15 10:33:35.028406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:14.059 [2024-11-15 10:33:35.028409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:14.627 10:33:35 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.627 10:33:35 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:14.627 10:33:35 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:14.627 10:33:35 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.627 10:33:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:14.627 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:14.627 POWER: Cannot set governor of lcore 0 to userspace 00:05:14.627 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:14.627 POWER: Cannot set governor of lcore 0 to performance 00:05:14.627 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:14.627 POWER: Cannot set governor of lcore 0 to userspace 00:05:14.627 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:14.627 POWER: Cannot set governor of lcore 0 to userspace 00:05:14.627 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:14.627 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:14.627 POWER: Unable to set Power Management Environment for lcore 0 00:05:14.627 [2024-11-15 10:33:35.750786] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:14.627 [2024-11-15 10:33:35.750815] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:14.627 [2024-11-15 10:33:35.750828] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:14.627 [2024-11-15 10:33:35.750855] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:14.627 [2024-11-15 10:33:35.750868] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:14.627 [2024-11-15 10:33:35.750896] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:14.627 10:33:35 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.627 10:33:35 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:14.627 10:33:35 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.627 10:33:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 
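The `POWER: failed to open ... scaling_governor` notices above come from the DPDK power library trying to write a governor name into each lcore's standard Linux cpufreq sysfs node, which this VM does not expose, so the dynamic scheduler falls back to running without the dpdk governor. A read-only sketch of that sysfs access, assuming a Linux host (the paths are standard cpufreq; nothing here is SPDK-specific):

```shell
# Query (rather than set) the scaling governor of one core, the sysfs
# node the POWER notices above refer to. Writing a governor name into
# this file is what the dpdk governor attempts per lcore and needs root
# plus real cpufreq support; in the VM from this log the node is absent.
core=0
gov_file=/sys/devices/system/cpu/cpu${core}/cpufreq/scaling_governor
if [[ -r $gov_file ]]; then
    echo "lcore $core governor: $(cat "$gov_file")"
else
    echo "POWER: cannot query governor of lcore $core" >&2
fi
```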
00:05:15.194 [2024-11-15 10:33:36.092928] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:15.194 10:33:36 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.194 10:33:36 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:15.194 10:33:36 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.194 10:33:36 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.194 10:33:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:15.194 ************************************ 00:05:15.194 START TEST scheduler_create_thread 00:05:15.194 ************************************ 00:05:15.194 10:33:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:15.194 10:33:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:15.194 10:33:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.194 10:33:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.194 2 00:05:15.194 10:33:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.194 10:33:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:15.194 10:33:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.194 10:33:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.194 3 00:05:15.194 10:33:36 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.194 10:33:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:15.194 10:33:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.194 10:33:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.194 4 00:05:15.194 10:33:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.194 10:33:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:15.194 10:33:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.194 10:33:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.194 5 00:05:15.194 10:33:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.194 10:33:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:15.194 10:33:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.194 10:33:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.194 6 00:05:15.194 10:33:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.195 10:33:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:15.195 10:33:36 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.195 10:33:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.195 7 00:05:15.195 10:33:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.195 10:33:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:15.195 10:33:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.195 10:33:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.195 8 00:05:15.195 10:33:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.195 10:33:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:15.195 10:33:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.195 10:33:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.195 9 00:05:15.195 10:33:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.195 10:33:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:15.195 10:33:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.195 10:33:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.195 10 00:05:15.195 10:33:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.195 10:33:36 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:15.195 10:33:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.195 10:33:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.195 10:33:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.195 10:33:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:15.195 10:33:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:15.195 10:33:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.195 10:33:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.195 10:33:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.195 10:33:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:15.195 10:33:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.195 10:33:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.572 10:33:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.572 10:33:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:16.572 10:33:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:16.572 10:33:37 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.572 10:33:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.608 ************************************ 00:05:17.608 END TEST scheduler_create_thread 00:05:17.608 ************************************ 00:05:17.608 10:33:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.608 00:05:17.608 real 0m2.621s 00:05:17.608 user 0m0.022s 00:05:17.608 sys 0m0.004s 00:05:17.608 10:33:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.608 10:33:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.865 10:33:38 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:17.865 10:33:38 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58182 00:05:17.865 10:33:38 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58182 ']' 00:05:17.865 10:33:38 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58182 00:05:17.865 10:33:38 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:17.865 10:33:38 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.865 10:33:38 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58182 00:05:17.865 killing process with pid 58182 00:05:17.865 10:33:38 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:17.865 10:33:38 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:17.865 10:33:38 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58182' 00:05:17.865 10:33:38 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58182 00:05:17.865 10:33:38 
event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58182 00:05:18.123 [2024-11-15 10:33:39.208185] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:19.499 00:05:19.499 real 0m5.903s 00:05:19.499 user 0m10.551s 00:05:19.499 sys 0m0.526s 00:05:19.499 ************************************ 00:05:19.499 END TEST event_scheduler 00:05:19.499 ************************************ 00:05:19.499 10:33:40 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.499 10:33:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:19.499 10:33:40 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:19.499 10:33:40 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:19.499 10:33:40 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.499 10:33:40 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.499 10:33:40 event -- common/autotest_common.sh@10 -- # set +x 00:05:19.499 ************************************ 00:05:19.499 START TEST app_repeat 00:05:19.499 ************************************ 00:05:19.499 10:33:40 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:19.499 10:33:40 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.499 10:33:40 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.499 10:33:40 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:19.499 10:33:40 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:19.499 10:33:40 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:19.499 10:33:40 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:19.499 10:33:40 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:19.499 10:33:40 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58293 00:05:19.499 10:33:40 event.app_repeat -- 
event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:19.499 10:33:40 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58293' 00:05:19.499 Process app_repeat pid: 58293 00:05:19.499 10:33:40 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:19.499 10:33:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:19.499 spdk_app_start Round 0 00:05:19.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:19.499 10:33:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:19.499 10:33:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58293 /var/tmp/spdk-nbd.sock 00:05:19.499 10:33:40 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58293 ']' 00:05:19.499 10:33:40 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:19.499 10:33:40 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.499 10:33:40 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:19.499 10:33:40 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.499 10:33:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:19.499 [2024-11-15 10:33:40.387533] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:05:19.499 [2024-11-15 10:33:40.388043] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58293 ] 00:05:19.499 [2024-11-15 10:33:40.574739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:19.758 [2024-11-15 10:33:40.709733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.758 [2024-11-15 10:33:40.709739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.325 10:33:41 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.325 10:33:41 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:20.325 10:33:41 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:20.901 Malloc0 00:05:20.901 10:33:41 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.172 Malloc1 00:05:21.172 10:33:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.172 10:33:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.172 10:33:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.172 10:33:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:21.172 10:33:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.172 10:33:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:21.172 10:33:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.173 10:33:42 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.173 10:33:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.173 10:33:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:21.173 10:33:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.173 10:33:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:21.173 10:33:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:21.173 10:33:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:21.173 10:33:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.173 10:33:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:21.432 /dev/nbd0 00:05:21.432 10:33:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:21.432 10:33:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:21.432 10:33:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:21.432 10:33:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:21.432 10:33:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:21.432 10:33:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:21.432 10:33:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:21.432 10:33:42 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:21.432 10:33:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:21.432 10:33:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:21.432 10:33:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:21.432 1+0 records in 00:05:21.432 1+0 
records out 00:05:21.432 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286809 s, 14.3 MB/s 00:05:21.432 10:33:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.432 10:33:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:21.432 10:33:42 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.432 10:33:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:21.432 10:33:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:21.432 10:33:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:21.432 10:33:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.432 10:33:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:21.690 /dev/nbd1 00:05:21.690 10:33:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:21.690 10:33:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:21.690 10:33:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:21.690 10:33:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:21.690 10:33:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:21.690 10:33:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:21.690 10:33:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:21.690 10:33:42 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:21.690 10:33:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:21.690 10:33:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:21.690 10:33:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:21.690 1+0 records in 00:05:21.690 1+0 records out 00:05:21.690 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000424832 s, 9.6 MB/s 00:05:21.690 10:33:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.690 10:33:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:21.690 10:33:42 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.691 10:33:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:21.691 10:33:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:21.691 10:33:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:21.691 10:33:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.691 10:33:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:21.691 10:33:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.691 10:33:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:22.002 10:33:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:22.002 { 00:05:22.002 "nbd_device": "/dev/nbd0", 00:05:22.002 "bdev_name": "Malloc0" 00:05:22.002 }, 00:05:22.002 { 00:05:22.002 "nbd_device": "/dev/nbd1", 00:05:22.002 "bdev_name": "Malloc1" 00:05:22.002 } 00:05:22.002 ]' 00:05:22.002 10:33:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:22.002 { 00:05:22.002 "nbd_device": "/dev/nbd0", 00:05:22.002 "bdev_name": "Malloc0" 00:05:22.002 }, 00:05:22.002 { 00:05:22.002 "nbd_device": "/dev/nbd1", 00:05:22.002 "bdev_name": "Malloc1" 00:05:22.002 } 00:05:22.002 ]' 00:05:22.002 10:33:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
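The `nbd_get_count` entries above show how the suite counts exported devices: it echoes the `nbd_get_disks` JSON through `jq -r '.[] | .nbd_device'` and then `grep -c /dev/nbd`. A minimal self-contained sketch of that counting step follows; the JSON literal is a stand-in for real `rpc.py` output (matching the two malloc bdevs in this log), and plain `grep -c` on the JSON is used so the sketch runs even without `jq` installed.

```shell
#!/bin/sh
# Stand-in for the JSON that `rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks`
# returns (assumption: two exported malloc bdevs, as in the log above).
nbd_disks_json='[
  { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" },
  { "nbd_device": "/dev/nbd1", "bdev_name": "Malloc1" }
]'

# The suite extracts device names with jq -r and counts them with grep -c;
# since each device sits on its own line here, grep -c on the raw JSON
# yields the same count without requiring jq.
count=$(printf '%s\n' "$nbd_disks_json" | grep -c '/dev/nbd')
echo "$count"
```

The suite then compares this count against the expected `2` (and against `0` after `nbd_stop_disk`), failing the test on a mismatch.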
00:05:22.002 10:33:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:22.002 /dev/nbd1' 00:05:22.002 10:33:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:22.002 /dev/nbd1' 00:05:22.003 10:33:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.003 10:33:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:22.003 10:33:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:22.003 10:33:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:22.003 10:33:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:22.003 10:33:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:22.003 10:33:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.003 10:33:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.003 10:33:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:22.003 10:33:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:22.003 10:33:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:22.003 10:33:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:22.003 256+0 records in 00:05:22.003 256+0 records out 00:05:22.003 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0112283 s, 93.4 MB/s 00:05:22.003 10:33:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.003 10:33:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:22.003 256+0 records in 00:05:22.003 256+0 records out 00:05:22.003 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0307574 s, 34.1 MB/s 00:05:22.003 10:33:43 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.003 10:33:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:22.003 256+0 records in 00:05:22.003 256+0 records out 00:05:22.003 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0345201 s, 30.4 MB/s 00:05:22.003 10:33:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:22.003 10:33:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.003 10:33:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.003 10:33:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:22.262 10:33:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:22.262 10:33:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:22.262 10:33:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:22.262 10:33:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.262 10:33:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:22.262 10:33:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.262 10:33:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:22.262 10:33:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:22.262 10:33:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:22.262 10:33:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.262 10:33:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.262 10:33:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:22.262 10:33:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:22.262 10:33:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.262 10:33:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:22.522 10:33:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:22.522 10:33:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:22.522 10:33:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:22.522 10:33:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.522 10:33:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.522 10:33:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:22.522 10:33:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:22.522 10:33:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.522 10:33:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.522 10:33:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:22.781 10:33:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:22.781 10:33:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:22.781 10:33:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:22.781 10:33:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.781 10:33:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.781 10:33:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:22.781 10:33:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:22.781 10:33:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.781 10:33:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:22.781 10:33:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.781 10:33:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:23.039 10:33:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:23.039 10:33:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:23.039 10:33:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:23.299 10:33:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:23.299 10:33:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:23.299 10:33:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:23.299 10:33:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:23.299 10:33:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:23.299 10:33:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:23.299 10:33:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:23.299 10:33:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:23.299 10:33:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:23.299 10:33:44 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:23.866 10:33:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:24.805 [2024-11-15 10:33:45.818756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.805 [2024-11-15 10:33:45.947222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.805 [2024-11-15 10:33:45.947236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.064 
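The `spdk_kill_instance SIGTERM` / `sleep 3` / reactor-startup sequence above is one iteration of the test's outer loop: `event/event.sh` runs three rounds (`for i in {0..2}`), each of which starts the app, creates two malloc bdevs over RPC, runs the nbd data-verify pass, and tears the app down. A sketch of that control flow, with a hypothetical `run_round` stand-in for the real RPC-driven body:

```shell
#!/bin/sh
# Sketch of the app_repeat round loop seen in this log. run_round is a
# hypothetical stand-in: the real suite calls rpc.py bdev_malloc_create,
# nbd_rpc_data_verify, and rpc.py spdk_kill_instance SIGTERM per round.
run_round() {
    round=$1
    echo "spdk_app_start Round $round"
    # ... create Malloc0/Malloc1, verify over /dev/nbd0 and /dev/nbd1,
    # ... kill the app with SIGTERM and sleep before the next round
}

# event.sh uses bash's {0..2}; spelled out here for plain sh.
for i in 0 1 2; do
    run_round "$i"
done
```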
[2024-11-15 10:33:46.144743] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:25.064 [2024-11-15 10:33:46.144864] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:26.967 10:33:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:26.967 spdk_app_start Round 1 00:05:26.967 10:33:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:26.967 10:33:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58293 /var/tmp/spdk-nbd.sock 00:05:26.967 10:33:47 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58293 ']' 00:05:26.967 10:33:47 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:26.967 10:33:47 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:26.967 10:33:47 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
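The `waitforlisten 58293 /var/tmp/spdk-nbd.sock` entry above polls (with `max_retries=100`) until the app listens on its UNIX socket, and `waitfornbd` earlier polls `/proc/partitions` up to 20 times. Both follow the same bounded-retry shape; a generic sketch, using "path exists" as the condition so it is runnable anywhere (the real helpers probe a socket or `/proc/partitions` instead):

```shell
#!/bin/sh
# Bounded retry loop in the style of waitforlisten/waitfornbd from
# autotest_common.sh: poll a condition up to max_retries times, return 0
# on success and 1 on timeout. The condition here (-e "$path") is a
# simplified stand-in for the suite's socket/partition checks.
wait_for_path() {
    path=$1
    max_retries=${2:-20}
    i=1
    while [ "$i" -le "$max_retries" ]; do
        [ -e "$path" ] && return 0
        i=$((i + 1))
        sleep 0.1   # the real helpers also pause between probes
    done
    return 1
}
```

On timeout the suite treats the nonzero return as a fatal test failure rather than proceeding with a half-started app.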
00:05:26.967 10:33:47 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.967 10:33:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:26.967 10:33:48 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.967 10:33:48 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:26.967 10:33:48 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.226 Malloc0 00:05:27.590 10:33:48 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.590 Malloc1 00:05:27.590 10:33:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.590 10:33:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.590 10:33:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.590 10:33:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:27.590 10:33:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.590 10:33:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:27.590 10:33:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.590 10:33:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.590 10:33:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.590 10:33:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:27.590 10:33:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.590 10:33:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:27.590 10:33:48 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:27.590 10:33:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:27.590 10:33:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.590 10:33:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:27.852 /dev/nbd0 00:05:27.852 10:33:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:27.852 10:33:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:27.852 10:33:48 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:27.852 10:33:48 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:27.852 10:33:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:27.852 10:33:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:27.852 10:33:48 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:27.852 10:33:48 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:27.852 10:33:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:27.853 10:33:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:27.853 10:33:48 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:27.853 1+0 records in 00:05:27.853 1+0 records out 00:05:27.853 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293407 s, 14.0 MB/s 00:05:27.853 10:33:49 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:27.853 10:33:49 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:27.853 10:33:49 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:27.853 
10:33:49 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:27.853 10:33:49 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:27.853 10:33:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.853 10:33:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.853 10:33:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:28.420 /dev/nbd1 00:05:28.420 10:33:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:28.420 10:33:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:28.421 10:33:49 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:28.421 10:33:49 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:28.421 10:33:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:28.421 10:33:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:28.421 10:33:49 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:28.421 10:33:49 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:28.421 10:33:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:28.421 10:33:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:28.421 10:33:49 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:28.421 1+0 records in 00:05:28.421 1+0 records out 00:05:28.421 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000297545 s, 13.8 MB/s 00:05:28.421 10:33:49 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.421 10:33:49 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:28.421 10:33:49 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.421 10:33:49 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:28.421 10:33:49 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:28.421 10:33:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:28.421 10:33:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.421 10:33:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:28.421 10:33:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.421 10:33:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.680 10:33:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:28.680 { 00:05:28.680 "nbd_device": "/dev/nbd0", 00:05:28.680 "bdev_name": "Malloc0" 00:05:28.680 }, 00:05:28.680 { 00:05:28.680 "nbd_device": "/dev/nbd1", 00:05:28.680 "bdev_name": "Malloc1" 00:05:28.680 } 00:05:28.680 ]' 00:05:28.680 10:33:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:28.680 { 00:05:28.680 "nbd_device": "/dev/nbd0", 00:05:28.680 "bdev_name": "Malloc0" 00:05:28.680 }, 00:05:28.680 { 00:05:28.680 "nbd_device": "/dev/nbd1", 00:05:28.680 "bdev_name": "Malloc1" 00:05:28.680 } 00:05:28.680 ]' 00:05:28.680 10:33:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:28.680 10:33:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:28.680 /dev/nbd1' 00:05:28.680 10:33:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:28.680 /dev/nbd1' 00:05:28.680 10:33:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.680 10:33:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:28.680 10:33:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:28.680 
10:33:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:28.680 10:33:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:28.680 10:33:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:28.680 10:33:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.680 10:33:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.680 10:33:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:28.680 10:33:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:28.680 10:33:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:28.681 10:33:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:28.681 256+0 records in 00:05:28.681 256+0 records out 00:05:28.681 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00704305 s, 149 MB/s 00:05:28.681 10:33:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.681 10:33:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:28.681 256+0 records in 00:05:28.681 256+0 records out 00:05:28.681 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0297614 s, 35.2 MB/s 00:05:28.681 10:33:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.681 10:33:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:28.681 256+0 records in 00:05:28.681 256+0 records out 00:05:28.681 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0304354 s, 34.5 MB/s 00:05:28.681 10:33:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
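The write pass above (`nbd_dd_data_verify ... write`) fills a temp file with 256 random 4 KiB blocks and `dd`s it onto each nbd device with `oflag=direct`; the verify pass that follows runs `cmp -b -n 1M` per device against the same file. The pattern can be sketched on plain files, since no nbd device can be assumed present outside the test VM:

```shell
#!/bin/sh
# Write/verify pattern from nbd_dd_data_verify, demonstrated on regular
# files (the suite targets /dev/nbd0 and /dev/nbd1 with oflag=direct; a
# temp file stands in for the block device so this runs anywhere).
src=$(mktemp)   # plays the role of .../test/event/nbdrandtest
dst=$(mktemp)   # plays the role of /dev/nbdX

# write pass: 256 x 4096-byte blocks of random data, copied to the target
dd if=/dev/urandom of="$src" bs=4096 count=256 2>/dev/null
dd if="$src" of="$dst" bs=4096 count=256 2>/dev/null

# verify pass: byte-compare the first 1 MiB, exactly as the log shows
ok=0
cmp -b -n 1M "$src" "$dst" && ok=1
size=$(stat -c %s "$dst")
rm -f "$src" "$dst"
```

Comparing against the original random file (rather than re-reading and checksumming) catches both data corruption and short writes in one step.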
00:05:28.681 10:33:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.681 10:33:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.681 10:33:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:28.681 10:33:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:28.681 10:33:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:28.681 10:33:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:28.681 10:33:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.681 10:33:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:28.681 10:33:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.681 10:33:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:28.681 10:33:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:28.681 10:33:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:28.681 10:33:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.681 10:33:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.681 10:33:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:28.681 10:33:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:28.681 10:33:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.681 10:33:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:29.248 10:33:50 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:29.248 10:33:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:29.248 10:33:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:29.248 10:33:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.248 10:33:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.248 10:33:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:29.248 10:33:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:29.248 10:33:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.248 10:33:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:29.248 10:33:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:29.248 10:33:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:29.248 10:33:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:29.248 10:33:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:29.248 10:33:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.248 10:33:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.248 10:33:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:29.248 10:33:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:29.248 10:33:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.248 10:33:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:29.248 10:33:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.248 10:33:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:29.814 10:33:50 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:29.814 10:33:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:29.814 10:33:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:29.814 10:33:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:29.814 10:33:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:29.814 10:33:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:29.814 10:33:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:29.814 10:33:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:29.814 10:33:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:29.814 10:33:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:29.815 10:33:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:29.815 10:33:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:29.815 10:33:50 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:30.380 10:33:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:31.357 [2024-11-15 10:33:52.323641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:31.357 [2024-11-15 10:33:52.457685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.357 [2024-11-15 10:33:52.457688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.640 [2024-11-15 10:33:52.654377] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:31.640 [2024-11-15 10:33:52.654541] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
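Each `waitfornbd` block in the rounds above ends with a readiness probe: after `nbdX` appears in `/proc/partitions`, the helper reads one 4096-byte block off the device into `test/event/nbdtest`, then `stat -c %s` must report a non-zero size before `return 0`. A sketch of that probe with a regular file standing in for the device (the suite adds `iflag=direct` when reading the real block device, which a regular-file demo omits):

```shell
#!/bin/sh
# Readiness probe from waitfornbd: read one block off the device and stat
# the copy to confirm the device actually returns data. A temp file stands
# in for /dev/nbd0 so the sketch is runnable without the nbd module.
dev=$(mktemp)   # stand-in for /dev/nbd0
out=$(mktemp)   # stand-in for .../test/event/nbdtest
dd if=/dev/zero of="$dev" bs=4096 count=1 2>/dev/null

dd if="$dev" of="$out" bs=4096 count=1 2>/dev/null
size=$(stat -c %s "$out")   # the suite requires '[' "$size" '!=' 0 ']'
rm -f "$dev" "$out"
[ "$size" != 0 ] && echo ready
```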
00:05:33.543 10:33:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:33.543 spdk_app_start Round 2 00:05:33.543 10:33:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:33.543 10:33:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58293 /var/tmp/spdk-nbd.sock 00:05:33.543 10:33:54 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58293 ']' 00:05:33.543 10:33:54 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:33.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:33.543 10:33:54 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.543 10:33:54 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:33.543 10:33:54 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.543 10:33:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:33.543 10:33:54 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.543 10:33:54 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:33.543 10:33:54 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:34.108 Malloc0 00:05:34.108 10:33:54 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:34.365 Malloc1 00:05:34.365 10:33:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:34.365 10:33:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.365 10:33:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:34.365 
10:33:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:34.365 10:33:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.365 10:33:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:34.365 10:33:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:34.365 10:33:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.365 10:33:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:34.365 10:33:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:34.366 10:33:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.366 10:33:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:34.366 10:33:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:34.366 10:33:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:34.366 10:33:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:34.366 10:33:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:34.624 /dev/nbd0 00:05:34.624 10:33:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:34.624 10:33:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:34.624 10:33:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:34.624 10:33:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:34.624 10:33:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:34.624 10:33:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:34.624 10:33:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:34.624 10:33:55 
event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:34.624 10:33:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:34.624 10:33:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:34.624 10:33:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:34.624 1+0 records in 00:05:34.624 1+0 records out 00:05:34.624 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299452 s, 13.7 MB/s 00:05:34.624 10:33:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:34.624 10:33:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:34.624 10:33:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:34.624 10:33:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:34.624 10:33:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:34.624 10:33:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:34.624 10:33:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:34.624 10:33:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:35.192 /dev/nbd1 00:05:35.192 10:33:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:35.192 10:33:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:35.192 10:33:56 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:35.192 10:33:56 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:35.192 10:33:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:35.192 10:33:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:35.192 10:33:56 
event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:35.192 10:33:56 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:35.192 10:33:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:35.192 10:33:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:35.192 10:33:56 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:35.192 1+0 records in 00:05:35.192 1+0 records out 00:05:35.192 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263369 s, 15.6 MB/s 00:05:35.192 10:33:56 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:35.192 10:33:56 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:35.192 10:33:56 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:35.192 10:33:56 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:35.192 10:33:56 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:35.192 10:33:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.192 10:33:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.192 10:33:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:35.192 10:33:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.192 10:33:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:35.451 10:33:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:35.451 { 00:05:35.451 "nbd_device": "/dev/nbd0", 00:05:35.451 "bdev_name": "Malloc0" 00:05:35.451 }, 00:05:35.451 { 00:05:35.451 "nbd_device": "/dev/nbd1", 00:05:35.451 "bdev_name": 
"Malloc1" 00:05:35.451 } 00:05:35.451 ]' 00:05:35.451 10:33:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:35.451 { 00:05:35.451 "nbd_device": "/dev/nbd0", 00:05:35.451 "bdev_name": "Malloc0" 00:05:35.451 }, 00:05:35.451 { 00:05:35.451 "nbd_device": "/dev/nbd1", 00:05:35.451 "bdev_name": "Malloc1" 00:05:35.451 } 00:05:35.451 ]' 00:05:35.451 10:33:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:35.451 10:33:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:35.451 /dev/nbd1' 00:05:35.451 10:33:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:35.451 /dev/nbd1' 00:05:35.451 10:33:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:35.451 10:33:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:35.451 10:33:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:35.451 10:33:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:35.451 10:33:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:35.451 10:33:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:35.451 10:33:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.451 10:33:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:35.451 10:33:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:35.451 10:33:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:35.451 10:33:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:35.451 10:33:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:35.451 256+0 records in 00:05:35.451 256+0 records out 00:05:35.451 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0079117 s, 133 MB/s 
00:05:35.451 10:33:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:35.451 10:33:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:35.451 256+0 records in 00:05:35.451 256+0 records out 00:05:35.451 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0289974 s, 36.2 MB/s 00:05:35.451 10:33:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:35.451 10:33:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:35.451 256+0 records in 00:05:35.451 256+0 records out 00:05:35.451 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0314202 s, 33.4 MB/s 00:05:35.451 10:33:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:35.451 10:33:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.451 10:33:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:35.451 10:33:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:35.452 10:33:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:35.452 10:33:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:35.452 10:33:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:35.452 10:33:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:35.452 10:33:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:35.452 10:33:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:35.452 10:33:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:05:35.452 10:33:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:35.452 10:33:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:35.452 10:33:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.452 10:33:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.452 10:33:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:35.452 10:33:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:35.452 10:33:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:35.452 10:33:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:35.710 10:33:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:35.710 10:33:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:35.710 10:33:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:35.710 10:33:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:35.710 10:33:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:35.710 10:33:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:35.710 10:33:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:35.710 10:33:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:35.710 10:33:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:35.710 10:33:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:36.276 10:33:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:36.276 10:33:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:05:36.276 10:33:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:36.276 10:33:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.276 10:33:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.276 10:33:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:36.276 10:33:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:36.276 10:33:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.276 10:33:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.276 10:33:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.276 10:33:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.534 10:33:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:36.534 10:33:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:36.534 10:33:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.534 10:33:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:36.534 10:33:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:36.534 10:33:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.534 10:33:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:36.534 10:33:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:36.534 10:33:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:36.534 10:33:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:36.534 10:33:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:36.534 10:33:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:36.534 10:33:57 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:37.100 10:33:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:38.042 [2024-11-15 10:33:59.072683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:38.300 [2024-11-15 10:33:59.202090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.300 [2024-11-15 10:33:59.202101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.300 [2024-11-15 10:33:59.393604] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:38.300 [2024-11-15 10:33:59.393707] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:40.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:40.199 10:34:01 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58293 /var/tmp/spdk-nbd.sock 00:05:40.199 10:34:01 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58293 ']' 00:05:40.199 10:34:01 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:40.199 10:34:01 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.199 10:34:01 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:40.199 10:34:01 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.199 10:34:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:40.199 10:34:01 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.199 10:34:01 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:40.199 10:34:01 event.app_repeat -- event/event.sh@39 -- # killprocess 58293 00:05:40.199 10:34:01 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58293 ']' 00:05:40.199 10:34:01 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58293 00:05:40.199 10:34:01 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:40.199 10:34:01 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.199 10:34:01 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58293 00:05:40.458 killing process with pid 58293 00:05:40.458 10:34:01 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:40.458 10:34:01 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:40.458 10:34:01 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58293' 00:05:40.458 10:34:01 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58293 00:05:40.458 10:34:01 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58293 00:05:41.392 spdk_app_start is called in Round 0. 00:05:41.392 Shutdown signal received, stop current app iteration 00:05:41.392 Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 reinitialization... 00:05:41.392 spdk_app_start is called in Round 1. 00:05:41.392 Shutdown signal received, stop current app iteration 00:05:41.392 Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 reinitialization... 00:05:41.392 spdk_app_start is called in Round 2. 
00:05:41.392 Shutdown signal received, stop current app iteration 00:05:41.392 Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 reinitialization... 00:05:41.392 spdk_app_start is called in Round 3. 00:05:41.392 Shutdown signal received, stop current app iteration 00:05:41.392 10:34:02 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:41.392 10:34:02 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:41.392 00:05:41.392 real 0m22.031s 00:05:41.392 user 0m48.944s 00:05:41.392 sys 0m3.143s 00:05:41.392 10:34:02 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.392 10:34:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:41.392 ************************************ 00:05:41.392 END TEST app_repeat 00:05:41.392 ************************************ 00:05:41.392 10:34:02 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:41.392 10:34:02 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:41.392 10:34:02 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.392 10:34:02 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.392 10:34:02 event -- common/autotest_common.sh@10 -- # set +x 00:05:41.392 ************************************ 00:05:41.392 START TEST cpu_locks 00:05:41.392 ************************************ 00:05:41.392 10:34:02 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:41.392 * Looking for test storage... 
00:05:41.392 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:41.392 10:34:02 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:41.392 10:34:02 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:41.392 10:34:02 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:41.651 10:34:02 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:41.651 10:34:02 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.651 10:34:02 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.651 10:34:02 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.651 10:34:02 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.651 10:34:02 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.651 10:34:02 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.651 10:34:02 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.651 10:34:02 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.651 10:34:02 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.651 10:34:02 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.651 10:34:02 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.651 10:34:02 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:41.651 10:34:02 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:41.651 10:34:02 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.651 10:34:02 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:41.651 10:34:02 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:41.651 10:34:02 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:41.651 10:34:02 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.651 10:34:02 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:41.651 10:34:02 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.651 10:34:02 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:41.651 10:34:02 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:41.651 10:34:02 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.652 10:34:02 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:41.652 10:34:02 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.652 10:34:02 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.652 10:34:02 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.652 10:34:02 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:41.652 10:34:02 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.652 10:34:02 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:41.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.652 --rc genhtml_branch_coverage=1 00:05:41.652 --rc genhtml_function_coverage=1 00:05:41.652 --rc genhtml_legend=1 00:05:41.652 --rc geninfo_all_blocks=1 00:05:41.652 --rc geninfo_unexecuted_blocks=1 00:05:41.652 00:05:41.652 ' 00:05:41.652 10:34:02 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:41.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.652 --rc genhtml_branch_coverage=1 00:05:41.652 --rc genhtml_function_coverage=1 00:05:41.652 --rc genhtml_legend=1 00:05:41.652 --rc geninfo_all_blocks=1 00:05:41.652 --rc geninfo_unexecuted_blocks=1 
00:05:41.652 00:05:41.652 ' 00:05:41.652 10:34:02 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:41.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.652 --rc genhtml_branch_coverage=1 00:05:41.652 --rc genhtml_function_coverage=1 00:05:41.652 --rc genhtml_legend=1 00:05:41.652 --rc geninfo_all_blocks=1 00:05:41.652 --rc geninfo_unexecuted_blocks=1 00:05:41.652 00:05:41.652 ' 00:05:41.652 10:34:02 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:41.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.652 --rc genhtml_branch_coverage=1 00:05:41.652 --rc genhtml_function_coverage=1 00:05:41.652 --rc genhtml_legend=1 00:05:41.652 --rc geninfo_all_blocks=1 00:05:41.652 --rc geninfo_unexecuted_blocks=1 00:05:41.652 00:05:41.652 ' 00:05:41.652 10:34:02 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:41.652 10:34:02 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:41.652 10:34:02 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:41.652 10:34:02 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:41.652 10:34:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.652 10:34:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.652 10:34:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:41.652 ************************************ 00:05:41.652 START TEST default_locks 00:05:41.652 ************************************ 00:05:41.652 10:34:02 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:41.652 10:34:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58768 00:05:41.652 10:34:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58768 00:05:41.652 10:34:02 event.cpu_locks.default_locks -- 
common/autotest_common.sh@835 -- # '[' -z 58768 ']' 00:05:41.652 10:34:02 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.652 10:34:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:41.652 10:34:02 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.652 10:34:02 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.652 10:34:02 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.652 10:34:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:41.652 [2024-11-15 10:34:02.731796] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:05:41.652 [2024-11-15 10:34:02.732057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58768 ] 00:05:41.960 [2024-11-15 10:34:02.923289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.960 [2024-11-15 10:34:03.079995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.895 10:34:03 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.895 10:34:03 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:42.895 10:34:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58768 00:05:42.895 10:34:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58768 00:05:42.895 10:34:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:43.462 10:34:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58768 00:05:43.462 10:34:04 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58768 ']' 00:05:43.462 10:34:04 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58768 00:05:43.462 10:34:04 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:43.462 10:34:04 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:43.462 10:34:04 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58768 00:05:43.462 10:34:04 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:43.462 10:34:04 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:43.462 killing process with pid 58768 00:05:43.462 10:34:04 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58768' 00:05:43.462 10:34:04 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58768 00:05:43.462 10:34:04 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58768 00:05:46.009 10:34:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58768 00:05:46.009 10:34:06 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:46.009 10:34:06 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58768 00:05:46.009 10:34:06 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:46.009 10:34:06 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:46.009 10:34:06 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:46.009 10:34:06 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:46.009 10:34:06 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58768 00:05:46.009 10:34:06 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58768 ']' 00:05:46.009 10:34:06 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.009 10:34:06 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.009 10:34:06 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:46.009 10:34:06 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.009 10:34:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.009 ERROR: process (pid: 58768) is no longer running 00:05:46.009 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58768) - No such process 00:05:46.009 10:34:06 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.009 10:34:06 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:46.009 10:34:06 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:46.009 10:34:06 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:46.009 10:34:06 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:46.009 10:34:06 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:46.009 10:34:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:46.009 10:34:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:46.009 10:34:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:46.009 10:34:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:46.009 00:05:46.009 real 0m4.010s 00:05:46.009 user 0m4.016s 00:05:46.009 sys 0m0.705s 00:05:46.009 10:34:06 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.009 ************************************ 00:05:46.009 END TEST default_locks 00:05:46.009 10:34:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.009 ************************************ 00:05:46.009 10:34:06 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:46.009 10:34:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:05:46.009 10:34:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.009 10:34:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.009 ************************************ 00:05:46.009 START TEST default_locks_via_rpc 00:05:46.009 ************************************ 00:05:46.009 10:34:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:46.009 10:34:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58843 00:05:46.009 10:34:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58843 00:05:46.009 10:34:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.009 10:34:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58843 ']' 00:05:46.009 10:34:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.009 10:34:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.009 10:34:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.009 10:34:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.009 10:34:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.009 [2024-11-15 10:34:06.792716] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:05:46.009 [2024-11-15 10:34:06.792937] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58843 ] 00:05:46.009 [2024-11-15 10:34:06.976291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.009 [2024-11-15 10:34:07.104830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.945 10:34:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.945 10:34:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:46.945 10:34:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:46.945 10:34:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.945 10:34:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.945 10:34:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.945 10:34:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:46.945 10:34:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:46.945 10:34:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:46.945 10:34:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:46.945 10:34:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:46.945 10:34:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.945 10:34:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.945 10:34:08 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.945 10:34:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58843 00:05:46.945 10:34:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58843 00:05:46.945 10:34:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:47.512 10:34:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58843 00:05:47.512 10:34:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58843 ']' 00:05:47.512 10:34:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58843 00:05:47.512 10:34:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:47.512 10:34:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.512 10:34:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58843 00:05:47.512 10:34:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.512 10:34:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.512 10:34:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58843' 00:05:47.512 killing process with pid 58843 00:05:47.512 10:34:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58843 00:05:47.512 10:34:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58843 00:05:50.063 00:05:50.063 real 0m4.085s 00:05:50.063 user 0m4.112s 00:05:50.063 sys 0m0.756s 00:05:50.063 ************************************ 00:05:50.063 END TEST default_locks_via_rpc 00:05:50.063 ************************************ 00:05:50.063 
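The default_locks_via_rpc run above exercises SPDK's per-core lock files (the harness checks them with `lslocks -p <pid> | grep -q spdk_cpu_lock`). A minimal self-contained sketch of that advisory-locking pattern, using a private temp directory and `flock(1)` in place of the SPDK internals — the file name here mirrors the trace but is an illustrative assumption, not the real path:

```shell
#!/usr/bin/env bash
# Sketch only: mimics the spdk_cpu_lock_NNN files seen in the trace,
# but in a private temp dir so nothing real is touched.
set -eu
lockdir=$(mktemp -d)
lockfile="$lockdir/spdk_cpu_lock_000"   # "core 0" lock (assumed naming)

# Claim the core: open the lock file on fd 9 and take an exclusive lock.
exec 9>"$lockfile"
flock -n 9 && echo "core 0 claimed"

# A second claimant must fail; this conflict is what the harness detects
# when it greps lslocks output for spdk_cpu_lock.
if ! flock -n "$lockfile" -c true; then
    echo "core 0 already locked"
fi
```

Note that `flock(2)` locks belong to the open file description, so the second `flock` invocation (a fresh open of the same file) is denied even though it runs from the same shell.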
10:34:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.063 10:34:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.063 10:34:10 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:50.063 10:34:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.063 10:34:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.063 10:34:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.063 ************************************ 00:05:50.063 START TEST non_locking_app_on_locked_coremask 00:05:50.063 ************************************ 00:05:50.063 10:34:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:50.063 10:34:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58919 00:05:50.063 10:34:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58919 /var/tmp/spdk.sock 00:05:50.063 10:34:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:50.063 10:34:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58919 ']' 00:05:50.063 10:34:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:50.063 10:34:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.063 10:34:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.063 10:34:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.063 10:34:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.063 [2024-11-15 10:34:10.906145] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:05:50.063 [2024-11-15 10:34:10.906297] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58919 ] 00:05:50.063 [2024-11-15 10:34:11.079401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.063 [2024-11-15 10:34:11.208294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:50.997 10:34:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.997 10:34:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:50.997 10:34:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58935 00:05:50.997 10:34:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:50.997 10:34:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58935 /var/tmp/spdk2.sock 00:05:50.997 10:34:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58935 ']' 00:05:50.997 10:34:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.997 10:34:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.997 10:34:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:50.997 10:34:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.997 10:34:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.254 [2024-11-15 10:34:12.203890] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:05:51.254 [2024-11-15 10:34:12.204869] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58935 ] 00:05:51.254 [2024-11-15 10:34:12.410772] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:51.254 [2024-11-15 10:34:12.410841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.512 [2024-11-15 10:34:12.668472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.040 10:34:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.040 10:34:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:54.040 10:34:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58919 00:05:54.040 10:34:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:54.040 10:34:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58919 00:05:54.975 10:34:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58919 00:05:54.975 10:34:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58919 ']' 00:05:54.975 10:34:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58919 00:05:54.975 10:34:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:54.975 10:34:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:54.975 10:34:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
58919 00:05:54.975 killing process with pid 58919 00:05:54.975 10:34:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:54.975 10:34:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:54.975 10:34:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58919' 00:05:54.975 10:34:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58919 00:05:54.975 10:34:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58919 00:06:00.240 10:34:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58935 00:06:00.240 10:34:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58935 ']' 00:06:00.240 10:34:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58935 00:06:00.240 10:34:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:00.240 10:34:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:00.240 10:34:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58935 00:06:00.240 killing process with pid 58935 00:06:00.240 10:34:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:00.240 10:34:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:00.240 10:34:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58935' 00:06:00.240 10:34:20 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58935 00:06:00.240 10:34:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58935 00:06:01.615 ************************************ 00:06:01.615 END TEST non_locking_app_on_locked_coremask 00:06:01.615 ************************************ 00:06:01.615 00:06:01.615 real 0m11.818s 00:06:01.615 user 0m12.499s 00:06:01.615 sys 0m1.489s 00:06:01.615 10:34:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.615 10:34:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.615 10:34:22 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:01.615 10:34:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.615 10:34:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.615 10:34:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.615 ************************************ 00:06:01.615 START TEST locking_app_on_unlocked_coremask 00:06:01.615 ************************************ 00:06:01.615 10:34:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:01.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
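The killprocess helper traced throughout these tests follows a recognizable shape: verify the PID is alive with `kill -0`, log, signal, then reap with `wait`. A simplified, hedged reconstruction — the real helper also inspects `uname` and `ps --no-headers -o comm=` and refuses to signal sudo, all omitted here:

```shell
# Simplified sketch of the killprocess pattern from the trace; the
# uname/ps/comm checks the real helper performs are omitted.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1   # still running?
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # reap; signal exit is expected
}

sleep 60 &
bgpid=$!
killprocess "$bgpid"
```

The `wait` matters in the harness context: it reaps the child so a later `kill -0` on the same PID reliably reports it gone.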
00:06:01.616 10:34:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59092 00:06:01.616 10:34:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59092 /var/tmp/spdk.sock 00:06:01.616 10:34:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:01.616 10:34:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59092 ']' 00:06:01.616 10:34:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.616 10:34:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.616 10:34:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.616 10:34:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.616 10:34:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.873 [2024-11-15 10:34:22.800177] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:06:01.873 [2024-11-15 10:34:22.800369] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59092 ] 00:06:01.873 [2024-11-15 10:34:22.992250] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:01.873 [2024-11-15 10:34:22.992349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.132 [2024-11-15 10:34:23.154384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:03.067 10:34:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.067 10:34:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:03.067 10:34:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59113 00:06:03.067 10:34:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:03.067 10:34:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59113 /var/tmp/spdk2.sock 00:06:03.067 10:34:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59113 ']' 00:06:03.067 10:34:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.067 10:34:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.067 10:34:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.067 10:34:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.067 10:34:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.067 [2024-11-15 10:34:24.157433] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:06:03.067 [2024-11-15 10:34:24.157811] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59113 ] 00:06:03.326 [2024-11-15 10:34:24.352936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.584 [2024-11-15 10:34:24.624181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.136 10:34:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.136 10:34:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:06.136 10:34:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59113 00:06:06.136 10:34:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59113 00:06:06.136 10:34:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:06.703 10:34:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59092 00:06:06.703 10:34:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59092 ']' 00:06:06.703 10:34:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59092 00:06:06.703 10:34:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:06.703 10:34:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.703 10:34:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59092 00:06:06.961 killing process with pid 59092 00:06:06.961 10:34:27 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:06.961 10:34:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:06.961 10:34:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59092' 00:06:06.961 10:34:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59092 00:06:06.961 10:34:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59092 00:06:12.227 10:34:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59113 00:06:12.227 10:34:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59113 ']' 00:06:12.227 10:34:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59113 00:06:12.227 10:34:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:12.227 10:34:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:12.227 10:34:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59113 00:06:12.227 killing process with pid 59113 00:06:12.227 10:34:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:12.227 10:34:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:12.227 10:34:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59113' 00:06:12.227 10:34:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59113 00:06:12.227 10:34:32 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@978 -- # wait 59113 00:06:13.603 ************************************ 00:06:13.603 END TEST locking_app_on_unlocked_coremask 00:06:13.603 ************************************ 00:06:13.603 00:06:13.603 real 0m11.979s 00:06:13.603 user 0m12.593s 00:06:13.603 sys 0m1.517s 00:06:13.603 10:34:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.603 10:34:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.603 10:34:34 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:13.603 10:34:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.603 10:34:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.603 10:34:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.603 ************************************ 00:06:13.603 START TEST locking_app_on_locked_coremask 00:06:13.603 ************************************ 00:06:13.603 10:34:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:13.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:13.603 10:34:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59263 00:06:13.603 10:34:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59263 /var/tmp/spdk.sock 00:06:13.603 10:34:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:13.603 10:34:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59263 ']' 00:06:13.603 10:34:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.603 10:34:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.603 10:34:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.603 10:34:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.603 10:34:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.862 [2024-11-15 10:34:34.832729] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:06:13.862 [2024-11-15 10:34:34.832912] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59263 ] 00:06:14.121 [2024-11-15 10:34:35.023366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.121 [2024-11-15 10:34:35.183057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.057 10:34:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.057 10:34:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:15.057 10:34:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59285 00:06:15.057 10:34:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:15.057 10:34:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59285 /var/tmp/spdk2.sock 00:06:15.057 10:34:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:15.057 10:34:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59285 /var/tmp/spdk2.sock 00:06:15.057 10:34:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:15.057 10:34:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.057 10:34:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:15.057 10:34:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:06:15.057 10:34:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59285 /var/tmp/spdk2.sock 00:06:15.057 10:34:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59285 ']' 00:06:15.057 10:34:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:15.057 10:34:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.057 10:34:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:15.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:15.057 10:34:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.057 10:34:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.057 [2024-11-15 10:34:36.208781] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:06:15.057 [2024-11-15 10:34:36.209196] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59285 ] 00:06:15.315 [2024-11-15 10:34:36.410914] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59263 has claimed it. 00:06:15.316 [2024-11-15 10:34:36.410999] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:15.882 ERROR: process (pid: 59285) is no longer running 00:06:15.882 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59285) - No such process 00:06:15.882 10:34:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.882 10:34:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:15.882 10:34:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:15.882 10:34:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:15.882 10:34:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:15.882 10:34:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:15.882 10:34:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59263 00:06:15.882 10:34:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59263 00:06:15.882 10:34:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:16.142 10:34:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59263 00:06:16.142 10:34:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59263 ']' 00:06:16.142 10:34:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59263 00:06:16.142 10:34:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:16.142 10:34:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.142 10:34:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59263 00:06:16.400 
killing process with pid 59263 00:06:16.400 10:34:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:16.400 10:34:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:16.400 10:34:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59263' 00:06:16.400 10:34:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59263 00:06:16.400 10:34:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59263 00:06:18.933 ************************************ 00:06:18.933 END TEST locking_app_on_locked_coremask 00:06:18.933 ************************************ 00:06:18.933 00:06:18.933 real 0m4.869s 00:06:18.933 user 0m5.207s 00:06:18.933 sys 0m0.898s 00:06:18.933 10:34:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.933 10:34:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.933 10:34:39 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:18.933 10:34:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.933 10:34:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.933 10:34:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.933 ************************************ 00:06:18.933 START TEST locking_overlapped_coremask 00:06:18.934 ************************************ 00:06:18.934 10:34:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:18.934 10:34:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59354 00:06:18.934 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.934 10:34:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59354 /var/tmp/spdk.sock 00:06:18.934 10:34:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:18.934 10:34:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59354 ']' 00:06:18.934 10:34:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.934 10:34:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.934 10:34:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.934 10:34:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.934 10:34:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.934 [2024-11-15 10:34:39.749297] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:06:18.934 [2024-11-15 10:34:39.749530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59354 ] 00:06:18.934 [2024-11-15 10:34:39.935770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:18.934 [2024-11-15 10:34:40.076877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.934 [2024-11-15 10:34:40.076998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.934 [2024-11-15 10:34:40.077009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.941 10:34:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.941 10:34:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:19.941 10:34:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59372 00:06:19.941 10:34:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:19.941 10:34:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59372 /var/tmp/spdk2.sock 00:06:19.941 10:34:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:19.941 10:34:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59372 /var/tmp/spdk2.sock 00:06:19.941 10:34:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:19.941 10:34:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.941 10:34:40 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:19.941 10:34:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.941 10:34:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59372 /var/tmp/spdk2.sock 00:06:19.941 10:34:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59372 ']' 00:06:19.941 10:34:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.941 10:34:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:19.941 10:34:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.941 10:34:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.941 10:34:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.201 [2024-11-15 10:34:41.102862] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:06:20.201 [2024-11-15 10:34:41.103114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59372 ] 00:06:20.201 [2024-11-15 10:34:41.308269] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59354 has claimed it. 00:06:20.201 [2024-11-15 10:34:41.308368] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:20.768 ERROR: process (pid: 59372) is no longer running 00:06:20.768 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59372) - No such process 00:06:20.768 10:34:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.768 10:34:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:20.768 10:34:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:20.768 10:34:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:20.768 10:34:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:20.768 10:34:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:20.768 10:34:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:20.768 10:34:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:20.768 10:34:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:20.768 10:34:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:20.768 10:34:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59354 00:06:20.768 10:34:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59354 ']' 00:06:20.768 10:34:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59354 00:06:20.768 10:34:41 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:20.768 10:34:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:20.768 10:34:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59354 00:06:20.768 10:34:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:20.768 10:34:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:20.768 10:34:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59354' 00:06:20.768 killing process with pid 59354 00:06:20.768 10:34:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59354 00:06:20.768 10:34:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59354 00:06:23.300 00:06:23.300 real 0m4.467s 00:06:23.300 user 0m12.162s 00:06:23.300 sys 0m0.727s 00:06:23.300 10:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.300 10:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.300 ************************************ 00:06:23.300 END TEST locking_overlapped_coremask 00:06:23.300 ************************************ 00:06:23.300 10:34:44 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:23.300 10:34:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.300 10:34:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.300 10:34:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.300 ************************************ 00:06:23.300 START TEST 
locking_overlapped_coremask_via_rpc 00:06:23.300 ************************************ 00:06:23.300 10:34:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:23.300 10:34:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59442 00:06:23.300 10:34:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59442 /var/tmp/spdk.sock 00:06:23.300 10:34:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:23.300 10:34:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59442 ']' 00:06:23.300 10:34:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.300 10:34:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.300 10:34:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.300 10:34:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.300 10:34:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.300 [2024-11-15 10:34:44.254969] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:06:23.300 [2024-11-15 10:34:44.255141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59442 ] 00:06:23.300 [2024-11-15 10:34:44.431452] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:23.300 [2024-11-15 10:34:44.431545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:23.559 [2024-11-15 10:34:44.569181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.559 [2024-11-15 10:34:44.569334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.559 [2024-11-15 10:34:44.569346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.494 10:34:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.494 10:34:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:24.494 10:34:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59460 00:06:24.494 10:34:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59460 /var/tmp/spdk2.sock 00:06:24.494 10:34:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:24.494 10:34:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59460 ']' 00:06:24.494 10:34:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.494 10:34:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.494 10:34:45 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.494 10:34:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.494 10:34:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.494 [2024-11-15 10:34:45.562473] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:06:24.494 [2024-11-15 10:34:45.562942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59460 ] 00:06:24.752 [2024-11-15 10:34:45.764834] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:24.752 [2024-11-15 10:34:45.764901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:25.010 [2024-11-15 10:34:46.039059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:25.010 [2024-11-15 10:34:46.042635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:25.010 [2024-11-15 10:34:46.042649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:27.542 10:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.542 10:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:27.542 10:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:27.542 10:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.542 10:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.542 10:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.542 10:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:27.542 10:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:27.542 10:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:27.542 10:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:27.542 10:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.543 10:34:48 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:27.543 10:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.543 10:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:27.543 10:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.543 10:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.543 [2024-11-15 10:34:48.340690] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59442 has claimed it. 00:06:27.543 request: 00:06:27.543 { 00:06:27.543 "method": "framework_enable_cpumask_locks", 00:06:27.543 "req_id": 1 00:06:27.543 } 00:06:27.543 Got JSON-RPC error response 00:06:27.543 response: 00:06:27.543 { 00:06:27.543 "code": -32603, 00:06:27.543 "message": "Failed to claim CPU core: 2" 00:06:27.543 } 00:06:27.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:27.543 10:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:27.543 10:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:27.543 10:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:27.543 10:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:27.543 10:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:27.543 10:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59442 /var/tmp/spdk.sock 00:06:27.543 10:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59442 ']' 00:06:27.543 10:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.543 10:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.543 10:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:27.543 10:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.543 10:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.543 10:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.543 10:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:27.543 10:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59460 /var/tmp/spdk2.sock 00:06:27.543 10:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59460 ']' 00:06:27.543 10:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:27.543 10:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.543 10:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:27.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:27.543 10:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.543 10:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.803 10:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.803 10:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:27.803 10:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:27.803 10:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:27.803 10:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:27.803 10:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:27.803 00:06:27.803 real 0m4.739s 00:06:27.803 user 0m1.818s 00:06:27.803 sys 0m0.223s 00:06:27.803 ************************************ 00:06:27.803 END TEST locking_overlapped_coremask_via_rpc 00:06:27.803 ************************************ 00:06:27.803 10:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.803 10:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.803 10:34:48 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:27.803 10:34:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59442 ]] 00:06:27.803 10:34:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59442 00:06:27.803 10:34:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59442 ']' 00:06:27.803 10:34:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59442 00:06:27.803 10:34:48 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:27.803 10:34:48 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.803 10:34:48 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59442 00:06:27.803 killing process with pid 59442 00:06:27.803 10:34:48 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.803 10:34:48 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.803 10:34:48 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59442' 00:06:27.803 10:34:48 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59442 00:06:27.803 10:34:48 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59442 00:06:30.335 10:34:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59460 ]] 00:06:30.335 10:34:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59460 00:06:30.335 10:34:51 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59460 ']' 00:06:30.335 10:34:51 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59460 00:06:30.335 10:34:51 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:30.335 10:34:51 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.335 10:34:51 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59460 00:06:30.335 killing process with pid 59460 00:06:30.335 10:34:51 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:30.335 10:34:51 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:30.335 10:34:51 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59460' 00:06:30.335 10:34:51 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59460 00:06:30.335 10:34:51 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59460 00:06:32.866 10:34:53 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:32.866 Process with pid 59442 is not found 00:06:32.866 Process with pid 59460 is not found 00:06:32.866 10:34:53 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:32.866 10:34:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59442 ]] 00:06:32.866 10:34:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59442 00:06:32.866 10:34:53 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59442 ']' 00:06:32.866 10:34:53 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59442 00:06:32.866 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59442) - No such process 00:06:32.866 10:34:53 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59442 is not found' 00:06:32.866 10:34:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59460 ]] 00:06:32.866 10:34:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59460 00:06:32.866 10:34:53 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59460 ']' 00:06:32.866 10:34:53 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59460 00:06:32.866 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59460) - No such process 00:06:32.866 10:34:53 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59460 is not found' 00:06:32.866 10:34:53 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:32.866 00:06:32.866 real 0m51.087s 00:06:32.866 user 1m28.294s 00:06:32.866 sys 0m7.510s 00:06:32.866 10:34:53 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.866 10:34:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.866 
************************************ 00:06:32.866 END TEST cpu_locks 00:06:32.866 ************************************ 00:06:32.866 00:06:32.866 real 1m24.318s 00:06:32.866 user 2m35.105s 00:06:32.866 sys 0m11.783s 00:06:32.866 10:34:53 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.866 10:34:53 event -- common/autotest_common.sh@10 -- # set +x 00:06:32.866 ************************************ 00:06:32.866 END TEST event 00:06:32.866 ************************************ 00:06:32.866 10:34:53 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:32.866 10:34:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:32.866 10:34:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.866 10:34:53 -- common/autotest_common.sh@10 -- # set +x 00:06:32.866 ************************************ 00:06:32.866 START TEST thread 00:06:32.866 ************************************ 00:06:32.866 10:34:53 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:32.866 * Looking for test storage... 
00:06:32.866 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:32.866 10:34:53 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:32.866 10:34:53 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:32.866 10:34:53 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:32.866 10:34:53 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:32.866 10:34:53 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:32.866 10:34:53 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:32.866 10:34:53 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:32.866 10:34:53 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.866 10:34:53 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:32.866 10:34:53 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:32.866 10:34:53 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:32.866 10:34:53 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:32.866 10:34:53 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:32.866 10:34:53 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:32.866 10:34:53 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:32.866 10:34:53 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:32.866 10:34:53 thread -- scripts/common.sh@345 -- # : 1 00:06:32.866 10:34:53 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:32.866 10:34:53 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:32.866 10:34:53 thread -- scripts/common.sh@365 -- # decimal 1 00:06:32.866 10:34:53 thread -- scripts/common.sh@353 -- # local d=1 00:06:32.866 10:34:53 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:32.866 10:34:53 thread -- scripts/common.sh@355 -- # echo 1 00:06:32.866 10:34:53 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:32.866 10:34:53 thread -- scripts/common.sh@366 -- # decimal 2 00:06:32.866 10:34:53 thread -- scripts/common.sh@353 -- # local d=2 00:06:32.866 10:34:53 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:32.866 10:34:53 thread -- scripts/common.sh@355 -- # echo 2 00:06:32.866 10:34:53 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:32.866 10:34:53 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:32.866 10:34:53 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:32.866 10:34:53 thread -- scripts/common.sh@368 -- # return 0 00:06:32.866 10:34:53 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:32.866 10:34:53 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:32.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.866 --rc genhtml_branch_coverage=1 00:06:32.866 --rc genhtml_function_coverage=1 00:06:32.866 --rc genhtml_legend=1 00:06:32.866 --rc geninfo_all_blocks=1 00:06:32.866 --rc geninfo_unexecuted_blocks=1 00:06:32.866 00:06:32.866 ' 00:06:32.866 10:34:53 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:32.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.866 --rc genhtml_branch_coverage=1 00:06:32.866 --rc genhtml_function_coverage=1 00:06:32.866 --rc genhtml_legend=1 00:06:32.866 --rc geninfo_all_blocks=1 00:06:32.866 --rc geninfo_unexecuted_blocks=1 00:06:32.866 00:06:32.866 ' 00:06:32.866 10:34:53 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:32.866 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.866 --rc genhtml_branch_coverage=1 00:06:32.866 --rc genhtml_function_coverage=1 00:06:32.866 --rc genhtml_legend=1 00:06:32.866 --rc geninfo_all_blocks=1 00:06:32.866 --rc geninfo_unexecuted_blocks=1 00:06:32.866 00:06:32.866 ' 00:06:32.866 10:34:53 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:32.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.866 --rc genhtml_branch_coverage=1 00:06:32.866 --rc genhtml_function_coverage=1 00:06:32.866 --rc genhtml_legend=1 00:06:32.866 --rc geninfo_all_blocks=1 00:06:32.866 --rc geninfo_unexecuted_blocks=1 00:06:32.866 00:06:32.866 ' 00:06:32.866 10:34:53 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:32.866 10:34:53 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:32.866 10:34:53 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.866 10:34:53 thread -- common/autotest_common.sh@10 -- # set +x 00:06:32.866 ************************************ 00:06:32.866 START TEST thread_poller_perf 00:06:32.866 ************************************ 00:06:32.866 10:34:53 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:32.867 [2024-11-15 10:34:53.801955] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:06:32.867 [2024-11-15 10:34:53.802133] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59655 ] 00:06:32.867 [2024-11-15 10:34:53.983469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.125 [2024-11-15 10:34:54.134753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.125 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:34.507 [2024-11-15T10:34:55.669Z] ====================================== 00:06:34.507 [2024-11-15T10:34:55.669Z] busy:2214407136 (cyc) 00:06:34.507 [2024-11-15T10:34:55.669Z] total_run_count: 302000 00:06:34.507 [2024-11-15T10:34:55.669Z] tsc_hz: 2200000000 (cyc) 00:06:34.507 [2024-11-15T10:34:55.669Z] ====================================== 00:06:34.507 [2024-11-15T10:34:55.669Z] poller_cost: 7332 (cyc), 3332 (nsec) 00:06:34.507 00:06:34.507 real 0m1.612s 00:06:34.507 user 0m1.404s 00:06:34.507 sys 0m0.100s 00:06:34.507 10:34:55 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.507 10:34:55 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:34.507 ************************************ 00:06:34.507 END TEST thread_poller_perf 00:06:34.507 ************************************ 00:06:34.507 10:34:55 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:34.507 10:34:55 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:34.507 10:34:55 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.507 10:34:55 thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.507 ************************************ 00:06:34.507 START TEST thread_poller_perf 00:06:34.507 
************************************ 00:06:34.507 10:34:55 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:34.507 [2024-11-15 10:34:55.466130] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:06:34.507 [2024-11-15 10:34:55.466282] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59697 ] 00:06:34.507 [2024-11-15 10:34:55.648278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.766 [2024-11-15 10:34:55.777836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.766 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:36.144 [2024-11-15T10:34:57.306Z] ====================================== 00:06:36.144 [2024-11-15T10:34:57.306Z] busy:2204156108 (cyc) 00:06:36.144 [2024-11-15T10:34:57.306Z] total_run_count: 3722000 00:06:36.144 [2024-11-15T10:34:57.306Z] tsc_hz: 2200000000 (cyc) 00:06:36.144 [2024-11-15T10:34:57.306Z] ====================================== 00:06:36.144 [2024-11-15T10:34:57.306Z] poller_cost: 592 (cyc), 269 (nsec) 00:06:36.144 00:06:36.144 real 0m1.595s 00:06:36.144 user 0m1.385s 00:06:36.144 sys 0m0.100s 00:06:36.144 10:34:57 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.144 ************************************ 00:06:36.144 END TEST thread_poller_perf 00:06:36.144 ************************************ 00:06:36.144 10:34:57 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:36.144 10:34:57 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:36.144 00:06:36.144 real 0m3.477s 00:06:36.144 user 0m2.925s 00:06:36.144 sys 0m0.329s 00:06:36.144 10:34:57 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.144 10:34:57 thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.144 ************************************ 00:06:36.144 END TEST thread 00:06:36.144 ************************************ 00:06:36.144 10:34:57 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:36.144 10:34:57 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:36.144 10:34:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.144 10:34:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.144 10:34:57 -- common/autotest_common.sh@10 -- # set +x 00:06:36.144 ************************************ 00:06:36.144 START TEST app_cmdline 00:06:36.144 ************************************ 00:06:36.144 10:34:57 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:36.144 * Looking for test storage... 00:06:36.144 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:36.144 10:34:57 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:36.144 10:34:57 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:36.144 10:34:57 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:36.144 10:34:57 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:36.144 10:34:57 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:36.144 10:34:57 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:36.144 10:34:57 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:36.144 10:34:57 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.144 10:34:57 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:36.144 10:34:57 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:36.144 10:34:57 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:36.144 10:34:57 app_cmdline -- scripts/common.sh@338 -- # 
local 'op=<' 00:06:36.144 10:34:57 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:36.144 10:34:57 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:36.144 10:34:57 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:36.144 10:34:57 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:36.144 10:34:57 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:36.144 10:34:57 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:36.144 10:34:57 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:36.144 10:34:57 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:36.144 10:34:57 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:36.144 10:34:57 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.144 10:34:57 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:36.144 10:34:57 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:36.144 10:34:57 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:36.144 10:34:57 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:36.144 10:34:57 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.144 10:34:57 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:36.144 10:34:57 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:36.144 10:34:57 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:36.144 10:34:57 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:36.144 10:34:57 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:36.144 10:34:57 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.144 10:34:57 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:36.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.144 --rc genhtml_branch_coverage=1 00:06:36.144 --rc genhtml_function_coverage=1 00:06:36.144 --rc 
genhtml_legend=1 00:06:36.144 --rc geninfo_all_blocks=1 00:06:36.144 --rc geninfo_unexecuted_blocks=1 00:06:36.144 00:06:36.144 ' 00:06:36.144 10:34:57 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:36.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.144 --rc genhtml_branch_coverage=1 00:06:36.145 --rc genhtml_function_coverage=1 00:06:36.145 --rc genhtml_legend=1 00:06:36.145 --rc geninfo_all_blocks=1 00:06:36.145 --rc geninfo_unexecuted_blocks=1 00:06:36.145 00:06:36.145 ' 00:06:36.145 10:34:57 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:36.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.145 --rc genhtml_branch_coverage=1 00:06:36.145 --rc genhtml_function_coverage=1 00:06:36.145 --rc genhtml_legend=1 00:06:36.145 --rc geninfo_all_blocks=1 00:06:36.145 --rc geninfo_unexecuted_blocks=1 00:06:36.145 00:06:36.145 ' 00:06:36.145 10:34:57 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:36.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.145 --rc genhtml_branch_coverage=1 00:06:36.145 --rc genhtml_function_coverage=1 00:06:36.145 --rc genhtml_legend=1 00:06:36.145 --rc geninfo_all_blocks=1 00:06:36.145 --rc geninfo_unexecuted_blocks=1 00:06:36.145 00:06:36.145 ' 00:06:36.145 10:34:57 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:36.145 10:34:57 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59775 00:06:36.145 10:34:57 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:36.145 10:34:57 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59775 00:06:36.403 10:34:57 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59775 ']' 00:06:36.403 10:34:57 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.403 10:34:57 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:06:36.403 10:34:57 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.403 10:34:57 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.403 10:34:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:36.403 [2024-11-15 10:34:57.464116] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:06:36.403 [2024-11-15 10:34:57.464482] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59775 ] 00:06:36.662 [2024-11-15 10:34:57.655574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.662 [2024-11-15 10:34:57.811301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.596 10:34:58 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.596 10:34:58 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:37.596 10:34:58 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:37.854 { 00:06:37.854 "version": "SPDK v25.01-pre git sha1 e081e4a1a", 00:06:37.854 "fields": { 00:06:37.854 "major": 25, 00:06:37.854 "minor": 1, 00:06:37.854 "patch": 0, 00:06:37.854 "suffix": "-pre", 00:06:37.854 "commit": "e081e4a1a" 00:06:37.854 } 00:06:37.854 } 00:06:37.854 10:34:58 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:37.854 10:34:58 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:37.854 10:34:58 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:37.854 10:34:58 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:37.854 10:34:58 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:37.854 10:34:58 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:37.854 10:34:58 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.854 10:34:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:37.854 10:34:58 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:37.854 10:34:59 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.113 10:34:59 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:38.113 10:34:59 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:38.113 10:34:59 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:38.113 10:34:59 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:38.113 10:34:59 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:38.113 10:34:59 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:38.113 10:34:59 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.113 10:34:59 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:38.113 10:34:59 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.113 10:34:59 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:38.113 10:34:59 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.113 10:34:59 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:38.113 10:34:59 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:38.113 10:34:59 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:38.372 request: 00:06:38.372 { 00:06:38.372 "method": "env_dpdk_get_mem_stats", 00:06:38.372 "req_id": 1 00:06:38.372 } 00:06:38.372 Got JSON-RPC error response 00:06:38.372 response: 00:06:38.372 { 00:06:38.372 "code": -32601, 00:06:38.372 "message": "Method not found" 00:06:38.372 } 00:06:38.372 10:34:59 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:38.372 10:34:59 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:38.372 10:34:59 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:38.372 10:34:59 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:38.372 10:34:59 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59775 00:06:38.372 10:34:59 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59775 ']' 00:06:38.372 10:34:59 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59775 00:06:38.372 10:34:59 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:38.372 10:34:59 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:38.372 10:34:59 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59775 00:06:38.372 killing process with pid 59775 00:06:38.372 10:34:59 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:38.372 10:34:59 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:38.372 10:34:59 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59775' 00:06:38.372 10:34:59 app_cmdline -- common/autotest_common.sh@973 -- # kill 59775 00:06:38.372 10:34:59 app_cmdline -- common/autotest_common.sh@978 -- # wait 59775 00:06:40.902 00:06:40.902 real 0m4.437s 00:06:40.902 user 0m4.874s 00:06:40.902 sys 0m0.714s 00:06:40.902 10:35:01 app_cmdline -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.902 ************************************ 00:06:40.902 END TEST app_cmdline 00:06:40.902 ************************************ 00:06:40.902 10:35:01 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:40.902 10:35:01 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:40.902 10:35:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.902 10:35:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.902 10:35:01 -- common/autotest_common.sh@10 -- # set +x 00:06:40.902 ************************************ 00:06:40.902 START TEST version 00:06:40.902 ************************************ 00:06:40.902 10:35:01 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:40.902 * Looking for test storage... 00:06:40.902 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:40.902 10:35:01 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:40.902 10:35:01 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:40.902 10:35:01 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:40.902 10:35:01 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:40.902 10:35:01 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.902 10:35:01 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.902 10:35:01 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.902 10:35:01 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.902 10:35:01 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.902 10:35:01 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.902 10:35:01 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.902 10:35:01 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.902 10:35:01 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.902 10:35:01 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:06:40.902 10:35:01 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.902 10:35:01 version -- scripts/common.sh@344 -- # case "$op" in 00:06:40.902 10:35:01 version -- scripts/common.sh@345 -- # : 1 00:06:40.902 10:35:01 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.902 10:35:01 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:40.902 10:35:01 version -- scripts/common.sh@365 -- # decimal 1 00:06:40.902 10:35:01 version -- scripts/common.sh@353 -- # local d=1 00:06:40.902 10:35:01 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.902 10:35:01 version -- scripts/common.sh@355 -- # echo 1 00:06:40.902 10:35:01 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.902 10:35:01 version -- scripts/common.sh@366 -- # decimal 2 00:06:40.902 10:35:01 version -- scripts/common.sh@353 -- # local d=2 00:06:40.902 10:35:01 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.902 10:35:01 version -- scripts/common.sh@355 -- # echo 2 00:06:40.902 10:35:01 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.902 10:35:01 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.902 10:35:01 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.902 10:35:01 version -- scripts/common.sh@368 -- # return 0 00:06:40.902 10:35:01 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.902 10:35:01 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:40.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.902 --rc genhtml_branch_coverage=1 00:06:40.902 --rc genhtml_function_coverage=1 00:06:40.902 --rc genhtml_legend=1 00:06:40.902 --rc geninfo_all_blocks=1 00:06:40.902 --rc geninfo_unexecuted_blocks=1 00:06:40.902 00:06:40.902 ' 00:06:40.902 10:35:01 version -- common/autotest_common.sh@1706 -- # 
LCOV_OPTS=' 00:06:40.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.902 --rc genhtml_branch_coverage=1 00:06:40.902 --rc genhtml_function_coverage=1 00:06:40.902 --rc genhtml_legend=1 00:06:40.902 --rc geninfo_all_blocks=1 00:06:40.902 --rc geninfo_unexecuted_blocks=1 00:06:40.902 00:06:40.902 ' 00:06:40.902 10:35:01 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:40.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.902 --rc genhtml_branch_coverage=1 00:06:40.902 --rc genhtml_function_coverage=1 00:06:40.902 --rc genhtml_legend=1 00:06:40.902 --rc geninfo_all_blocks=1 00:06:40.902 --rc geninfo_unexecuted_blocks=1 00:06:40.902 00:06:40.902 ' 00:06:40.902 10:35:01 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:40.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.902 --rc genhtml_branch_coverage=1 00:06:40.902 --rc genhtml_function_coverage=1 00:06:40.902 --rc genhtml_legend=1 00:06:40.902 --rc geninfo_all_blocks=1 00:06:40.902 --rc geninfo_unexecuted_blocks=1 00:06:40.902 00:06:40.902 ' 00:06:40.902 10:35:01 version -- app/version.sh@17 -- # get_header_version major 00:06:40.902 10:35:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:40.902 10:35:01 version -- app/version.sh@14 -- # cut -f2 00:06:40.902 10:35:01 version -- app/version.sh@14 -- # tr -d '"' 00:06:40.902 10:35:01 version -- app/version.sh@17 -- # major=25 00:06:40.902 10:35:01 version -- app/version.sh@18 -- # get_header_version minor 00:06:40.902 10:35:01 version -- app/version.sh@14 -- # cut -f2 00:06:40.902 10:35:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:40.902 10:35:01 version -- app/version.sh@14 -- # tr -d '"' 00:06:40.902 10:35:01 version -- app/version.sh@18 -- # minor=1 00:06:40.902 10:35:01 
version -- app/version.sh@19 -- # get_header_version patch 00:06:40.902 10:35:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:40.902 10:35:01 version -- app/version.sh@14 -- # cut -f2 00:06:40.902 10:35:01 version -- app/version.sh@14 -- # tr -d '"' 00:06:40.902 10:35:01 version -- app/version.sh@19 -- # patch=0 00:06:40.902 10:35:01 version -- app/version.sh@20 -- # get_header_version suffix 00:06:40.902 10:35:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:40.902 10:35:01 version -- app/version.sh@14 -- # tr -d '"' 00:06:40.902 10:35:01 version -- app/version.sh@14 -- # cut -f2 00:06:40.902 10:35:01 version -- app/version.sh@20 -- # suffix=-pre 00:06:40.902 10:35:01 version -- app/version.sh@22 -- # version=25.1 00:06:40.902 10:35:01 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:40.902 10:35:01 version -- app/version.sh@28 -- # version=25.1rc0 00:06:40.902 10:35:01 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:40.902 10:35:01 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:40.902 10:35:01 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:40.902 10:35:01 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:40.902 00:06:40.902 real 0m0.247s 00:06:40.902 user 0m0.150s 00:06:40.902 sys 0m0.132s 00:06:40.902 ************************************ 00:06:40.902 END TEST version 00:06:40.902 ************************************ 00:06:40.902 10:35:01 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.902 10:35:01 version -- common/autotest_common.sh@10 -- # set +x 00:06:40.902 
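The version.sh run above reduces to a small amount of string assembly once major/minor/patch/suffix are cut out of include/spdk/version.h. A minimal standalone sketch of that assembly, using the values this run extracted (major=25, minor=1, patch=0, suffix=-pre); this is a hand-reduced illustration, not the script itself:

```shell
# Values as extracted by the grep/cut/tr pipeline in this run's xtrace.
major=25; minor=1; patch=0; suffix=-pre

# version.sh@22: base version is "major.minor".
version="${major}.${minor}"

# version.sh@25: the patch component is appended only when nonzero
# (skipped in this run, since patch is 0).
if (( patch != 0 )); then
  version="${version}.${patch}"
fi

# version.sh@28: a -pre suffix maps to an rc0 tag, which is then compared
# against python3's spdk.__version__ (25.1rc0 in this run).
if [[ "$suffix" == -pre ]]; then
  version="${version}rc0"
fi

echo "$version"   # prints 25.1rc0
```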
10:35:01 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:40.902 10:35:01 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:40.902 10:35:01 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:40.902 10:35:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.902 10:35:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.903 10:35:01 -- common/autotest_common.sh@10 -- # set +x 00:06:40.903 ************************************ 00:06:40.903 START TEST bdev_raid 00:06:40.903 ************************************ 00:06:40.903 10:35:01 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:40.903 * Looking for test storage... 00:06:40.903 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:40.903 10:35:01 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:40.903 10:35:01 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:06:40.903 10:35:01 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:41.161 10:35:02 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:41.161 10:35:02 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.161 10:35:02 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.161 10:35:02 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.161 10:35:02 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.161 10:35:02 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.161 10:35:02 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.161 10:35:02 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.161 10:35:02 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.161 10:35:02 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.161 10:35:02 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.161 10:35:02 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:06:41.161 10:35:02 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:41.161 10:35:02 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:41.161 10:35:02 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.161 10:35:02 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:41.161 10:35:02 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:41.161 10:35:02 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:41.161 10:35:02 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.161 10:35:02 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:41.161 10:35:02 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.161 10:35:02 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:41.161 10:35:02 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:41.161 10:35:02 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.161 10:35:02 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:41.161 10:35:02 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.161 10:35:02 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.161 10:35:02 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.161 10:35:02 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:41.161 10:35:02 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.161 10:35:02 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:41.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.161 --rc genhtml_branch_coverage=1 00:06:41.161 --rc genhtml_function_coverage=1 00:06:41.161 --rc genhtml_legend=1 00:06:41.161 --rc geninfo_all_blocks=1 00:06:41.161 --rc geninfo_unexecuted_blocks=1 00:06:41.161 00:06:41.161 ' 00:06:41.161 10:35:02 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:41.161 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:41.161 --rc genhtml_branch_coverage=1 00:06:41.161 --rc genhtml_function_coverage=1 00:06:41.161 --rc genhtml_legend=1 00:06:41.161 --rc geninfo_all_blocks=1 00:06:41.161 --rc geninfo_unexecuted_blocks=1 00:06:41.161 00:06:41.161 ' 00:06:41.161 10:35:02 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:41.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.161 --rc genhtml_branch_coverage=1 00:06:41.161 --rc genhtml_function_coverage=1 00:06:41.161 --rc genhtml_legend=1 00:06:41.161 --rc geninfo_all_blocks=1 00:06:41.161 --rc geninfo_unexecuted_blocks=1 00:06:41.161 00:06:41.161 ' 00:06:41.161 10:35:02 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:41.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.161 --rc genhtml_branch_coverage=1 00:06:41.161 --rc genhtml_function_coverage=1 00:06:41.161 --rc genhtml_legend=1 00:06:41.161 --rc geninfo_all_blocks=1 00:06:41.161 --rc geninfo_unexecuted_blocks=1 00:06:41.161 00:06:41.161 ' 00:06:41.161 10:35:02 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:41.161 10:35:02 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:41.161 10:35:02 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:41.161 10:35:02 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:41.161 10:35:02 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:41.161 10:35:02 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:41.161 10:35:02 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:41.161 10:35:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:41.161 10:35:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.161 10:35:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:41.161 ************************************ 
00:06:41.161 START TEST raid1_resize_data_offset_test 00:06:41.161 ************************************ 00:06:41.161 10:35:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:06:41.161 10:35:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=59968 00:06:41.161 10:35:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 59968' 00:06:41.161 Process raid pid: 59968 00:06:41.161 10:35:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:41.161 10:35:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 59968 00:06:41.161 10:35:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 59968 ']' 00:06:41.161 10:35:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.161 10:35:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.161 10:35:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.161 10:35:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.161 10:35:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.161 [2024-11-15 10:35:02.203743] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:06:41.161 [2024-11-15 10:35:02.204829] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:41.419 [2024-11-15 10:35:02.395933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.419 [2024-11-15 10:35:02.528011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.677 [2024-11-15 10:35:02.735767] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:41.677 [2024-11-15 10:35:02.735839] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:42.245 10:35:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.245 10:35:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:06:42.245 10:35:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:42.245 10:35:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.245 10:35:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.245 malloc0 00:06:42.245 10:35:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.245 10:35:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:42.245 10:35:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.245 10:35:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.245 malloc1 00:06:42.245 10:35:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.245 10:35:03 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:42.245 10:35:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.245 10:35:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.245 null0 00:06:42.245 10:35:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.245 10:35:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:42.245 10:35:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.245 10:35:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.245 [2024-11-15 10:35:03.378117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:42.245 [2024-11-15 10:35:03.380683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:42.245 [2024-11-15 10:35:03.380759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:42.245 [2024-11-15 10:35:03.380947] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:42.245 [2024-11-15 10:35:03.380971] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:42.245 [2024-11-15 10:35:03.381307] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:42.245 [2024-11-15 10:35:03.381702] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:42.245 [2024-11-15 10:35:03.381766] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:42.245 [2024-11-15 10:35:03.382189] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
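The data_offset assertion that follows (bdev_raid.sh@929) boils down to pulling one field out of the bdev_raid_get_bdevs JSON and comparing it to 2048. A standalone sketch of that check, with the RPC reply replaced by a hand-written fragment so it runs without a live target; the JSON snippet and the extraction are illustrative, the expected value is from this run:

```shell
# The real test pipes `rpc_cmd bdev_raid_get_bdevs all` through
# jq -r '.[].base_bdevs_list[2].data_offset' and asserts (( 2048 == 2048 )).
# Here the RPC reply is faked with a single field so the extraction and the
# comparison can be exercised in isolation.
rpc_reply='"data_offset": 2048'   # stand-in for the jq pipeline's input
offset=${rpc_reply##*: }          # crude field extraction for this sketch only

if (( offset == 2048 )); then
  echo "null0 data_offset: ${offset} (expected 2048)"
fi
```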
00:06:42.245 10:35:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:42.245 10:35:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all
00:06:42.245 10:35:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:42.245 10:35:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:42.245 10:35:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:06:42.245 10:35:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:42.503 10:35:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 ))
00:06:42.503 10:35:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0
00:06:42.503 10:35:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:42.503 10:35:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:42.503 [2024-11-15 10:35:03.438204] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0
00:06:42.503 10:35:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:42.503 10:35:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30
00:06:42.503 10:35:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:42.503 10:35:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:43.071 malloc2
00:06:43.071 10:35:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:43.071 10:35:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev Raid malloc2
00:06:43.071 10:35:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:43.071 10:35:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:43.071 [2024-11-15 10:35:03.979615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
[2024-11-15 10:35:03.996748] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:06:43.071 10:35:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:43.071 [2024-11-15 10:35:03.999446] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid
00:06:43.071 10:35:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all
00:06:43.071 10:35:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:06:43.071 10:35:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:43.071 10:35:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:43.071 10:35:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:43.071 10:35:04 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 ))
00:06:43.071 10:35:04 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 59968
00:06:43.071 10:35:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 59968 ']'
00:06:43.071 10:35:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 59968
00:06:43.071 10:35:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname
00:06:43.071 10:35:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:43.071 10:35:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59968
00:06:43.071 10:35:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:43.071 killing process with pid 59968 10:35:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:43.071 10:35:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59968'
00:06:43.071 10:35:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 59968
00:06:43.071 10:35:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 59968
00:06:43.071 [2024-11-15 10:35:04.076437] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:06:43.071 [2024-11-15 10:35:04.078511] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled
00:06:43.071 [2024-11-15 10:35:04.078591] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:43.071 [2024-11-15 10:35:04.078618] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2
00:06:43.071 [2024-11-15 10:35:04.109712] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:43.071 [2024-11-15 10:35:04.110278] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:43.071 [2024-11-15 10:35:04.110315] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:06:44.988 [2024-11-15 10:35:05.747583] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:06:45.925 ************************************
00:06:45.925 END TEST raid1_resize_data_offset_test
00:06:45.925 ************************************
00:06:45.925 10:35:06 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0
00:06:45.925
00:06:45.925 real 0m4.682s
00:06:45.925 user 0m4.644s
00:06:45.925 sys 0m0.624s
00:06:45.925 10:35:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:45.925 10:35:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:45.925 10:35:06 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0
00:06:45.925 10:35:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:45.925 10:35:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:45.925 10:35:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:06:45.925 ************************************
00:06:45.925 START TEST raid0_resize_superblock_test
00:06:45.925 ************************************
00:06:45.925 10:35:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0
00:06:45.925 10:35:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0
00:06:45.925 10:35:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60052
00:06:45.925 Process raid pid: 60052
00:06:45.925 10:35:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60052'
00:06:45.925 10:35:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60052
00:06:45.925 10:35:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60052 ']'
00:06:45.925 10:35:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:06:45.925 10:35:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:45.925 10:35:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:45.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:45.925 10:35:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:45.925 10:35:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:45.925 10:35:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:45.925 [2024-11-15 10:35:06.934760] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization...
[2024-11-15 10:35:06.934937] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:46.184 [2024-11-15 10:35:07.120105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:46.185 [2024-11-15 10:35:07.253340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:46.443 [2024-11-15 10:35:07.461049] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:46.443 [2024-11-15 10:35:07.461112] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:47.009 10:35:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:47.009 10:35:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:06:47.009 10:35:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:06:47.009 10:35:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:47.009 10:35:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:47.575 malloc0
00:06:47.575 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:47.575 10:35:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:06:47.575 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:47.575 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:47.575 [2024-11-15 10:35:08.444529] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
[2024-11-15 10:35:08.444610] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-11-15 10:35:08.444654] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
[2024-11-15 10:35:08.444677] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-11-15 10:35:08.447510] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-11-15 10:35:08.447555] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:06:47.575 pt0
00:06:47.575 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:47.575 10:35:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:06:47.575 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:47.575 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:47.575 1eff5252-a8a3-4421-8aa9-389f6fa3e42e
00:06:47.576 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:47.576 10:35:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:06:47.576 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:47.576 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:47.576 20a69415-7c26-4e3f-b71e-b88926720884
00:06:47.576 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:47.576 10:35:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:06:47.576 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:47.576 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:47.576 06c655e6-3981-493a-a0f7-60136b5d00ae
00:06:47.576 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:47.576 10:35:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:06:47.576 10:35:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:06:47.576 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:47.576 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:47.576 [2024-11-15 10:35:08.589360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 20a69415-7c26-4e3f-b71e-b88926720884 is claimed
[2024-11-15 10:35:08.589482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 06c655e6-3981-493a-a0f7-60136b5d00ae is claimed
[2024-11-15 10:35:08.589696] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
[2024-11-15 10:35:08.589723] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512
[2024-11-15 10:35:08.590046] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:06:47.576 [2024-11-15 10:35:08.590298] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
[2024-11-15 10:35:08.590318] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
[2024-11-15 10:35:08.590532] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:47.576 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:47.576 10:35:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:06:47.576 10:35:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:06:47.576 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:47.576 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:47.576 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:47.576 10:35:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:06:47.576 10:35:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:06:47.576 10:35:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:06:47.576 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:47.576 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:47.576 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:47.576 10:35:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:06:47.576 10:35:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:47.576 10:35:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:47.576 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:47.576 10:35:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:47.576 10:35:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks'
00:06:47.576 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:47.576 [2024-11-15 10:35:08.697635] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:47.576 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:47.866 10:35:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:47.866 10:35:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:47.866 10:35:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 ))
00:06:47.866 10:35:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100
00:06:47.866 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:47.866 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:47.866 [2024-11-15 10:35:08.745620] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:06:47.866 [2024-11-15 10:35:08.745654] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '20a69415-7c26-4e3f-b71e-b88926720884' was resized: old size 131072, new size 204800
00:06:47.866 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:47.866 10:35:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100
00:06:47.866 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:47.866 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:47.866 [2024-11-15 10:35:08.753488] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:06:47.866 [2024-11-15 10:35:08.753533] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '06c655e6-3981-493a-a0f7-60136b5d00ae' was resized: old size 131072, new size 204800
[2024-11-15 10:35:08.753577] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216
00:06:47.866 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:47.866 10:35:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks'
00:06:47.866 10:35:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:06:47.866 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:47.866 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:47.866 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:47.866 10:35:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 ))
00:06:47.866 10:35:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:06:47.866 10:35:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks'
00:06:47.866 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:47.866 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:47.866 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:47.866 10:35:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 ))
00:06:47.866 10:35:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:47.866 10:35:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:47.866 10:35:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:47.866 10:35:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks'
00:06:47.866 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:47.866 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:47.866 [2024-11-15 10:35:08.873729] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:47.866 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:47.866 10:35:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:47.866 10:35:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:47.866 10:35:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 ))
00:06:47.866 10:35:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0
00:06:47.866 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:47.866 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:47.866 [2024-11-15 10:35:08.949464] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0
[2024-11-15 10:35:08.949588] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0
[2024-11-15 10:35:08.949612] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
[2024-11-15 10:35:08.949637] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1
[2024-11-15 10:35:08.949776] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-11-15 10:35:08.949827] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-11-15 10:35:08.949847] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:06:47.866 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:47.866 10:35:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:06:47.866 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:47.866 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:47.866 [2024-11-15 10:35:08.957357] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
[2024-11-15 10:35:08.957427] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-11-15 10:35:08.957458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
[2024-11-15 10:35:08.957480] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-11-15 10:35:08.960326] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-11-15 10:35:08.960528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:06:47.867 pt0
00:06:47.867 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:47.867 10:35:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine
00:06:47.867 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:47.867 [2024-11-15 10:35:08.962847] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 20a69415-7c26-4e3f-b71e-b88926720884
[2024-11-15 10:35:08.962922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 20a69415-7c26-4e3f-b71e-b88926720884 is claimed 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:47.867 [2024-11-15 10:35:08.963066] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 06c655e6-3981-493a-a0f7-60136b5d00ae
[2024-11-15 10:35:08.963103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 06c655e6-3981-493a-a0f7-60136b5d00ae is claimed
[2024-11-15 10:35:08.963262] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 06c655e6-3981-493a-a0f7-60136b5d00ae (2) smaller than existing raid bdev Raid (3)
[2024-11-15 10:35:08.963299] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 20a69415-7c26-4e3f-b71e-b88926720884: File exists
[2024-11-15 10:35:08.963359] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
[2024-11-15 10:35:08.963378] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512
[2024-11-15 10:35:08.963721] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
[2024-11-15 10:35:08.964052] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
[2024-11-15 10:35:08.964076] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00
[2024-11-15 10:35:08.964268] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:47.867 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:47.867 10:35:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:47.867 10:35:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks'
00:06:47.867 10:35:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:47.867 10:35:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:47.867 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:47.867 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:47.867 [2024-11-15 10:35:08.977732] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:47.867 10:35:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:47.867 10:35:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:47.867 10:35:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:47.867 10:35:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 ))
00:06:47.867 10:35:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60052
00:06:47.867 10:35:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60052 ']'
00:06:47.867 10:35:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60052
00:06:48.125 10:35:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname
00:06:48.125 10:35:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:48.125 10:35:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60052 killing process with pid 60052 10:35:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:48.125 10:35:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:48.125 10:35:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60052'
00:06:48.125 10:35:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60052
00:06:48.125 [2024-11-15 10:35:09.063867] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:06:48.125 [2024-11-15 10:35:09.063967] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:48.125 10:35:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60052
00:06:48.125 [2024-11-15 10:35:09.064031] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:48.125 [2024-11-15 10:35:09.064047] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline
00:06:49.502 [2024-11-15 10:35:10.366833] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:06:50.440 10:35:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0
00:06:50.440
00:06:50.440 real 0m4.586s
00:06:50.440 user 0m4.941s
00:06:50.440 sys 0m0.601s
00:06:50.440 10:35:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:50.440 10:35:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:50.440 ************************************
00:06:50.440 END TEST raid0_resize_superblock_test
00:06:50.440 ************************************
00:06:50.440 10:35:11 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1
00:06:50.440 10:35:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:50.440 10:35:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:50.440 10:35:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:06:50.440 ************************************
00:06:50.440 START TEST raid1_resize_superblock_test
00:06:50.440 ************************************
00:06:50.440 10:35:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1
00:06:50.440 10:35:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1
00:06:50.440 10:35:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60150
00:06:50.440 10:35:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:06:50.440 Process raid pid: 60150
00:06:50.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:50.440 10:35:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60150'
00:06:50.440 10:35:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60150
00:06:50.440 10:35:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60150 ']'
00:06:50.440 10:35:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:50.440 10:35:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:50.440 10:35:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:50.440 10:35:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:50.440 10:35:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:50.440 [2024-11-15 10:35:11.594653] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization...
00:06:50.441 [2024-11-15 10:35:11.594881] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:50.698 [2024-11-15 10:35:11.780008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:50.957 [2024-11-15 10:35:11.938762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:51.215 [2024-11-15 10:35:12.178967] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:51.215 [2024-11-15 10:35:12.179008] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:51.473 10:35:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:51.473 10:35:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:06:51.473 10:35:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:06:51.473 10:35:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:51.473 10:35:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:52.042 malloc0
00:06:52.042 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:52.042 10:35:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:06:52.042 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:52.042 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:52.042 [2024-11-15 10:35:13.089435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
[2024-11-15 10:35:13.090381] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-11-15 10:35:13.090471] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
[2024-11-15 10:35:13.090609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-11-15 10:35:13.093366] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-11-15 10:35:13.093416] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:06:52.042 pt0
00:06:52.042 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:52.042 10:35:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:06:52.042 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:52.042 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:52.302 25052aa1-f2cf-44e6-99d6-a5e53d69450a
00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:52.302 633cebe7-250d-4fa5-b14e-885df3eea7a4
00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:52.302 1bfcd530-64e0-4772-a94f-f89c88ba3b4d
00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:52.302 [2024-11-15 10:35:13.228059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 633cebe7-250d-4fa5-b14e-885df3eea7a4 is claimed
[2024-11-15 10:35:13.228311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 1bfcd530-64e0-4772-a94f-f89c88ba3b4d is claimed
[2024-11-15 10:35:13.228552] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
[2024-11-15 10:35:13.228579] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512
[2024-11-15 10:35:13.228897] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
[2024-11-15 10:35:13.229141] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
[2024-11-15 10:35:13.229158] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
[2024-11-15 10:35:13.229344] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test --
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.302 [2024-11-15 
10:35:13.340390] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.302 [2024-11-15 10:35:13.388319] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:52.302 [2024-11-15 10:35:13.388350] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '633cebe7-250d-4fa5-b14e-885df3eea7a4' was resized: old size 131072, new size 204800 00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.302 [2024-11-15 10:35:13.396214] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:52.302 [2024-11-15 10:35:13.396242] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '1bfcd530-64e0-4772-a94f-f89c88ba3b4d' was resized: old size 131072, new size 204800 00:06:52.302 
[2024-11-15 10:35:13.396288] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.302 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.562 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.562 10:35:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:52.562 10:35:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:52.562 10:35:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:52.562 10:35:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:52.562 10:35:13 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:06:52.562 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.562 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.562 [2024-11-15 10:35:13.508380] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:52.562 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.562 10:35:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:52.562 10:35:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:52.562 10:35:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:06:52.562 10:35:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:52.562 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.562 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.562 [2024-11-15 10:35:13.560179] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:52.562 [2024-11-15 10:35:13.560284] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:52.562 [2024-11-15 10:35:13.560324] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:52.562 [2024-11-15 10:35:13.560562] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:52.562 [2024-11-15 10:35:13.560825] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:52.562 [2024-11-15 10:35:13.560919] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:52.562 
[2024-11-15 10:35:13.560941] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:52.562 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.562 10:35:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:52.562 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.562 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.562 [2024-11-15 10:35:13.568042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:52.562 [2024-11-15 10:35:13.568231] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:52.562 [2024-11-15 10:35:13.568302] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:52.562 [2024-11-15 10:35:13.568450] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:52.562 [2024-11-15 10:35:13.571356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:52.562 [2024-11-15 10:35:13.571522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:52.562 pt0 00:06:52.562 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.562 10:35:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:52.562 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.562 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.562 [2024-11-15 10:35:13.573940] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 633cebe7-250d-4fa5-b14e-885df3eea7a4 00:06:52.562 [2024-11-15 10:35:13.574028] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 633cebe7-250d-4fa5-b14e-885df3eea7a4 is claimed 00:06:52.562 [2024-11-15 10:35:13.574173] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 1bfcd530-64e0-4772-a94f-f89c88ba3b4d 00:06:52.562 [2024-11-15 10:35:13.574208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 1bfcd530-64e0-4772-a94f-f89c88ba3b4d is claimed 00:06:52.562 [2024-11-15 10:35:13.574358] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 1bfcd530-64e0-4772-a94f-f89c88ba3b4d (2) smaller than existing raid bdev Raid (3) 00:06:52.562 [2024-11-15 10:35:13.574390] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 633cebe7-250d-4fa5-b14e-885df3eea7a4: File exists 00:06:52.562 [2024-11-15 10:35:13.574449] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:52.562 [2024-11-15 10:35:13.574469] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:52.562 [2024-11-15 10:35:13.574789] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:52.562 [2024-11-15 10:35:13.575124] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:52.562 [2024-11-15 10:35:13.575147] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:52.562 [2024-11-15 10:35:13.575334] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:52.562 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.562 10:35:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:52.562 10:35:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:52.562 10:35:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # 
case $raid_level in 00:06:52.562 10:35:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:06:52.562 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.562 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.562 [2024-11-15 10:35:13.588442] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:52.562 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.562 10:35:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:52.562 10:35:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:52.562 10:35:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:06:52.562 10:35:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60150 00:06:52.562 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60150 ']' 00:06:52.562 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60150 00:06:52.562 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:06:52.562 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.562 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60150 00:06:52.562 killing process with pid 60150 00:06:52.562 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:52.562 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:52.562 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 60150' 00:06:52.562 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60150 00:06:52.562 [2024-11-15 10:35:13.669386] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:52.562 10:35:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60150 00:06:52.562 [2024-11-15 10:35:13.669475] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:52.562 [2024-11-15 10:35:13.669561] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:52.562 [2024-11-15 10:35:13.669577] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:53.938 [2024-11-15 10:35:14.950091] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:54.874 10:35:15 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:54.874 00:06:54.874 real 0m4.494s 00:06:54.874 user 0m4.770s 00:06:54.874 sys 0m0.633s 00:06:54.874 10:35:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.874 10:35:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.874 ************************************ 00:06:54.874 END TEST raid1_resize_superblock_test 00:06:54.874 ************************************ 00:06:54.874 10:35:16 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:06:54.874 10:35:16 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:06:54.874 10:35:16 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:06:54.874 10:35:16 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:06:54.874 10:35:16 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:06:54.874 10:35:16 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:06:54.874 10:35:16 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:54.874 10:35:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.874 10:35:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:54.874 ************************************ 00:06:54.874 START TEST raid_function_test_raid0 00:06:54.874 ************************************ 00:06:54.874 10:35:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:06:54.874 10:35:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:06:54.874 10:35:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:54.874 10:35:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:54.874 Process raid pid: 60253 00:06:54.874 10:35:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60253 00:06:54.874 10:35:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:54.874 10:35:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60253' 00:06:54.874 10:35:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60253 00:06:54.874 10:35:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60253 ']' 00:06:54.874 10:35:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.132 10:35:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.132 10:35:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:55.132 10:35:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.132 10:35:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:55.132 [2024-11-15 10:35:16.133484] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:06:55.132 [2024-11-15 10:35:16.133915] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:55.390 [2024-11-15 10:35:16.315133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.390 [2024-11-15 10:35:16.448104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.648 [2024-11-15 10:35:16.654160] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:55.648 [2024-11-15 10:35:16.654419] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:56.214 10:35:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.214 10:35:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:06:56.214 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:56.214 10:35:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.214 10:35:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:56.214 Base_1 00:06:56.214 10:35:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.214 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:56.214 10:35:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.214 
10:35:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:56.214 Base_2 00:06:56.214 10:35:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.214 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:06:56.214 10:35:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.214 10:35:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:56.214 [2024-11-15 10:35:17.277949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:56.214 [2024-11-15 10:35:17.280316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:56.214 [2024-11-15 10:35:17.280602] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:56.214 [2024-11-15 10:35:17.280632] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:56.214 [2024-11-15 10:35:17.280972] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:56.214 [2024-11-15 10:35:17.281165] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:56.214 [2024-11-15 10:35:17.281180] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:06:56.214 [2024-11-15 10:35:17.281371] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:56.214 10:35:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.214 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:56.214 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:56.214 10:35:17 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.214 10:35:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:56.214 10:35:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.214 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:56.214 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:56.214 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:56.214 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:56.214 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:56.214 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:56.214 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:56.214 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:56.214 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:06:56.214 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:56.214 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:56.214 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:56.472 [2024-11-15 10:35:17.562079] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:56.472 /dev/nbd0 00:06:56.472 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:56.472 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:06:56.472 10:35:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:56.472 10:35:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:06:56.472 10:35:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:56.472 10:35:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:56.472 10:35:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:56.472 10:35:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:06:56.472 10:35:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:56.472 10:35:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:56.472 10:35:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:56.472 1+0 records in 00:06:56.472 1+0 records out 00:06:56.472 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000658705 s, 6.2 MB/s 00:06:56.472 10:35:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.472 10:35:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:06:56.472 10:35:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.472 10:35:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:56.472 10:35:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:06:56.472 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:56.472 10:35:17 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:56.472 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:56.472 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:56.472 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:56.730 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:56.730 { 00:06:56.730 "nbd_device": "/dev/nbd0", 00:06:56.730 "bdev_name": "raid" 00:06:56.730 } 00:06:56.730 ]' 00:06:56.730 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:56.730 { 00:06:56.730 "nbd_device": "/dev/nbd0", 00:06:56.730 "bdev_name": "raid" 00:06:56.730 } 00:06:56.730 ]' 00:06:56.730 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:56.986 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:56.986 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:56.986 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:56.986 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:06:56.986 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:06:56.986 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:06:56.986 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:56.986 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:56.986 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:56.986 10:35:17 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:56.986 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:56.987 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:56.987 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:56.987 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:56.987 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:56.987 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:56.987 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:56.987 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:56.987 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:56.987 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:56.987 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:56.987 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:56.987 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:56.987 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:56.987 4096+0 records in 00:06:56.987 4096+0 records out 00:06:56.987 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0244633 s, 85.7 MB/s 00:06:56.987 10:35:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:57.244 4096+0 records in 00:06:57.245 4096+0 records out 00:06:57.245 2097152 bytes (2.1 MB, 2.0 MiB) copied, 
0.288689 s, 7.3 MB/s 00:06:57.245 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:57.245 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:57.245 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:57.245 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:57.245 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:57.245 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:57.245 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:57.245 128+0 records in 00:06:57.245 128+0 records out 00:06:57.245 65536 bytes (66 kB, 64 KiB) copied, 0.0015353 s, 42.7 MB/s 00:06:57.245 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:57.245 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:57.245 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:57.245 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:57.245 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:57.245 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:57.245 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:57.245 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:57.245 2035+0 records in 00:06:57.245 2035+0 records out 00:06:57.245 1041920 
bytes (1.0 MB, 1018 KiB) copied, 0.00816278 s, 128 MB/s 00:06:57.245 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:57.245 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:57.245 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:57.245 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:57.245 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:57.245 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:57.245 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:57.245 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:57.245 456+0 records in 00:06:57.245 456+0 records out 00:06:57.245 233472 bytes (233 kB, 228 KiB) copied, 0.00341113 s, 68.4 MB/s 00:06:57.245 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:57.245 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:57.245 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:57.245 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:57.245 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:57.245 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:06:57.245 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:57.245 10:35:18 bdev_raid.raid_function_test_raid0 
-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:57.245 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:57.245 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:57.245 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:06:57.245 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.245 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:57.811 [2024-11-15 10:35:18.707162] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:57.811 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:57.811 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:57.811 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:57.811 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.811 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.811 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:57.811 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:06:57.811 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.811 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:57.811 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:57.811 10:35:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_get_disks 00:06:58.070 10:35:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:58.070 10:35:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:58.070 10:35:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:58.070 10:35:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:58.070 10:35:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:58.070 10:35:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:58.070 10:35:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:06:58.070 10:35:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:06:58.070 10:35:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:58.070 10:35:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:06:58.070 10:35:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:58.070 10:35:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60253 00:06:58.070 10:35:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60253 ']' 00:06:58.070 10:35:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60253 00:06:58.070 10:35:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:06:58.070 10:35:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:58.070 10:35:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60253 00:06:58.070 killing process with pid 60253 00:06:58.070 10:35:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:58.070 10:35:19 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:58.070 10:35:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60253' 00:06:58.070 10:35:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60253 00:06:58.070 [2024-11-15 10:35:19.136805] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:58.070 10:35:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60253 00:06:58.070 [2024-11-15 10:35:19.136926] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:58.070 [2024-11-15 10:35:19.136991] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:58.070 [2024-11-15 10:35:19.137016] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:06:58.327 [2024-11-15 10:35:19.324589] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:59.259 ************************************ 00:06:59.259 END TEST raid_function_test_raid0 00:06:59.259 ************************************ 00:06:59.259 10:35:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:06:59.259 00:06:59.259 real 0m4.332s 00:06:59.259 user 0m5.373s 00:06:59.259 sys 0m0.984s 00:06:59.259 10:35:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.259 10:35:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:59.259 10:35:20 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:06:59.259 10:35:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:59.259 10:35:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.259 10:35:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:59.259 
************************************ 00:06:59.259 START TEST raid_function_test_concat 00:06:59.259 ************************************ 00:06:59.259 Process raid pid: 60382 00:06:59.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.259 10:35:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:06:59.259 10:35:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:06:59.259 10:35:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:59.259 10:35:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:59.259 10:35:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60382 00:06:59.259 10:35:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60382' 00:06:59.259 10:35:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60382 00:06:59.259 10:35:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:59.259 10:35:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60382 ']' 00:06:59.259 10:35:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.259 10:35:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.260 10:35:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:59.260 10:35:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.260 10:35:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:59.518 [2024-11-15 10:35:20.517749] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:06:59.518 [2024-11-15 10:35:20.518178] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:59.778 [2024-11-15 10:35:20.708184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.778 [2024-11-15 10:35:20.868300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.036 [2024-11-15 10:35:21.118739] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:00.036 [2024-11-15 10:35:21.119018] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:00.294 10:35:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.295 10:35:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:07:00.295 10:35:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:00.295 10:35:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.295 10:35:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:00.553 Base_1 00:07:00.553 10:35:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.553 10:35:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:00.553 10:35:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:00.553 10:35:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:00.553 Base_2 00:07:00.553 10:35:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.553 10:35:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:00.553 10:35:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.553 10:35:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:00.553 [2024-11-15 10:35:21.543261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:00.553 [2024-11-15 10:35:21.545663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:00.553 [2024-11-15 10:35:21.545761] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:00.553 [2024-11-15 10:35:21.545790] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:00.553 [2024-11-15 10:35:21.546116] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:00.553 [2024-11-15 10:35:21.546306] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:00.553 [2024-11-15 10:35:21.546322] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:00.553 [2024-11-15 10:35:21.546535] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:00.553 10:35:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.553 10:35:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:00.553 10:35:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.553 10:35:21 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:00.553 10:35:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:00.553 10:35:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.553 10:35:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:00.553 10:35:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:00.553 10:35:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:00.553 10:35:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:00.553 10:35:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:00.553 10:35:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:00.553 10:35:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:00.553 10:35:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:00.553 10:35:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:00.553 10:35:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:00.553 10:35:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:00.553 10:35:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:00.812 [2024-11-15 10:35:21.891418] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:00.812 /dev/nbd0 00:07:00.812 10:35:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:00.812 10:35:21 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:00.812 10:35:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:00.812 10:35:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:07:00.812 10:35:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:00.812 10:35:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:00.812 10:35:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:00.812 10:35:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:07:00.812 10:35:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:00.812 10:35:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:00.812 10:35:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:00.812 1+0 records in 00:07:00.812 1+0 records out 00:07:00.812 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336905 s, 12.2 MB/s 00:07:00.812 10:35:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.812 10:35:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:07:00.812 10:35:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.812 10:35:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:00.812 10:35:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:07:00.812 10:35:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:00.812 
10:35:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:00.812 10:35:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:00.812 10:35:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:00.812 10:35:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:01.379 10:35:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:01.379 { 00:07:01.379 "nbd_device": "/dev/nbd0", 00:07:01.379 "bdev_name": "raid" 00:07:01.379 } 00:07:01.379 ]' 00:07:01.379 10:35:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:01.379 { 00:07:01.379 "nbd_device": "/dev/nbd0", 00:07:01.379 "bdev_name": "raid" 00:07:01.379 } 00:07:01.379 ]' 00:07:01.379 10:35:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:01.379 10:35:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:01.379 10:35:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:01.379 10:35:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:01.379 10:35:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:01.379 10:35:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:01.379 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:01.379 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:01.379 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:01.379 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:01.379 
10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:01.379 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:01.379 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:01.379 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:01.379 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:01.379 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:01.379 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:01.379 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:01.379 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:01.379 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:01.379 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:01.379 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:01.379 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:01.379 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:01.379 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:01.379 4096+0 records in 00:07:01.379 4096+0 records out 00:07:01.379 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0234824 s, 89.3 MB/s 00:07:01.379 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:01.637 4096+0 records in 00:07:01.637 4096+0 
records out 00:07:01.637 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.374212 s, 5.6 MB/s 00:07:01.637 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:01.637 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:01.895 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:01.895 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:01.895 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:01.895 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:01.895 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:01.895 128+0 records in 00:07:01.896 128+0 records out 00:07:01.896 65536 bytes (66 kB, 64 KiB) copied, 0.000618202 s, 106 MB/s 00:07:01.896 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:01.896 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:01.896 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:01.896 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:01.896 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:01.896 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:01.896 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:01.896 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 
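Each iteration of the loop in the log follows the same pattern: zero a block range in the reference file with `dd conv=notrunc`, issue `blkdiscard` for the matching byte range on `/dev/nbd0`, flush, then `cmp` the full 2 MiB. A hedged, self-contained sketch of that loop, using plain temp files in place of `/raidtest/raidrandtest` and `/dev/nbd0`, with the discard simulated by a zeroing `dd` (the real test relies on discarded RAID ranges reading back as zeroes):

```shell
# Sketch of the unmap-and-verify loop driving the dd/blkdiscard/cmp entries
# in the log. Temp files stand in for the reference file and the nbd device.
set -e
blksize=512
rw_blk_num=4096
rw_len=$(( blksize * rw_blk_num ))           # 2097152 bytes, as in the log
unmap_blk_offs=(0 1028 321)
unmap_blk_nums=(128 2035 456)
ref=$(mktemp)
dev=$(mktemp)
dd if=/dev/urandom of="$ref" bs=$blksize count=$rw_blk_num status=none
cp "$ref" "$dev"                             # stands in for the dd to /dev/nbd0
for ((i = 0; i < ${#unmap_blk_offs[@]}; i++)); do
    off=${unmap_blk_offs[i]}
    num=${unmap_blk_nums[i]}
    # zero the range in the reference file, as the real loop does before discarding
    dd if=/dev/zero of="$ref" bs=$blksize seek=$off count=$num conv=notrunc status=none
    # simulated `blkdiscard -o $((off * blksize)) -l $((num * blksize))` on the device
    dd if=/dev/zero of="$dev" bs=$blksize seek=$off count=$num conv=notrunc status=none
    cmp -b -n "$rw_len" "$ref" "$dev"        # set -e aborts here on any mismatch
done
loop_result=pass
rm -f "$ref" "$dev"
echo "$loop_result"
```

Note that `blkdiscard` takes byte offsets, so the log's `-o 526336 -l 1041920` is just block offset 1028 and count 2035 multiplied by the 512-byte block size.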
00:07:01.896 2035+0 records in 00:07:01.896 2035+0 records out 00:07:01.896 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0100943 s, 103 MB/s 00:07:01.896 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:01.896 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:01.896 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:01.896 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:01.896 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:01.896 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:01.896 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:01.896 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:01.896 456+0 records in 00:07:01.896 456+0 records out 00:07:01.896 233472 bytes (233 kB, 228 KiB) copied, 0.00264765 s, 88.2 MB/s 00:07:01.896 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:01.896 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:01.896 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:01.896 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:01.896 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:01.896 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:07:01.896 10:35:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:01.896 10:35:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:01.896 10:35:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:01.896 10:35:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:01.896 10:35:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:07:01.896 10:35:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.896 10:35:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:02.154 10:35:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:02.154 [2024-11-15 10:35:23.290480] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:02.154 10:35:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:02.154 10:35:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:02.154 10:35:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.154 10:35:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.154 10:35:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:02.154 10:35:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:07:02.154 10:35:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.154 10:35:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:02.154 10:35:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:02.154 10:35:23 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:02.720 10:35:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:02.720 10:35:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:02.720 10:35:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:02.720 10:35:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:02.720 10:35:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:02.720 10:35:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:02.720 10:35:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:07:02.720 10:35:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:07:02.720 10:35:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:02.720 10:35:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:07:02.720 10:35:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:02.720 10:35:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60382 00:07:02.720 10:35:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60382 ']' 00:07:02.720 10:35:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60382 00:07:02.720 10:35:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:07:02.720 10:35:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:02.720 10:35:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60382 00:07:02.720 killing process with pid 60382 00:07:02.720 10:35:23 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:02.720 10:35:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:02.720 10:35:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60382' 00:07:02.720 10:35:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60382 00:07:02.720 [2024-11-15 10:35:23.815795] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:02.720 10:35:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60382 00:07:02.720 [2024-11-15 10:35:23.815910] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:02.720 [2024-11-15 10:35:23.815980] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:02.720 [2024-11-15 10:35:23.815999] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:02.978 [2024-11-15 10:35:24.015778] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:03.913 10:35:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:07:03.913 00:07:03.913 real 0m4.623s 00:07:03.913 user 0m5.777s 00:07:03.913 sys 0m1.132s 00:07:03.913 10:35:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.913 ************************************ 00:07:03.913 END TEST raid_function_test_concat 00:07:03.913 ************************************ 00:07:03.913 10:35:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:04.171 10:35:25 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:07:04.171 10:35:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:04.171 10:35:25 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.171 10:35:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:04.171 ************************************ 00:07:04.171 START TEST raid0_resize_test 00:07:04.171 ************************************ 00:07:04.171 10:35:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:07:04.171 10:35:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:07:04.171 10:35:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:04.171 10:35:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:04.171 10:35:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:04.171 10:35:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:04.171 10:35:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:04.171 10:35:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:04.171 Process raid pid: 60522 00:07:04.171 10:35:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:04.172 10:35:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60522 00:07:04.172 10:35:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60522' 00:07:04.172 10:35:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:04.172 10:35:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60522 00:07:04.172 10:35:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60522 ']' 00:07:04.172 10:35:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.172 10:35:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:07:04.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.172 10:35:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.172 10:35:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.172 10:35:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.172 [2024-11-15 10:35:25.174827] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:07:04.172 [2024-11-15 10:35:25.175242] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:04.430 [2024-11-15 10:35:25.351687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.430 [2024-11-15 10:35:25.483543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.688 [2024-11-15 10:35:25.688549] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:04.688 [2024-11-15 10:35:25.688799] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:05.256 10:35:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.256 10:35:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:05.256 10:35:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:05.256 10:35:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.256 10:35:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.256 Base_1 00:07:05.256 10:35:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.256 
10:35:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:05.256 10:35:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.256 10:35:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.256 Base_2 00:07:05.256 10:35:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.256 10:35:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:07:05.256 10:35:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:05.256 10:35:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.256 10:35:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.256 [2024-11-15 10:35:26.159459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:05.256 [2024-11-15 10:35:26.161826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:05.256 [2024-11-15 10:35:26.162035] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:05.256 [2024-11-15 10:35:26.162066] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:05.256 [2024-11-15 10:35:26.162364] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:05.256 [2024-11-15 10:35:26.162543] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:05.256 [2024-11-15 10:35:26.162560] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:05.256 [2024-11-15 10:35:26.162731] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:05.256 10:35:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.256 
10:35:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:05.256 10:35:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.256 10:35:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.256 [2024-11-15 10:35:26.167451] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:05.256 [2024-11-15 10:35:26.167486] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:05.256 true 00:07:05.256 10:35:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.256 10:35:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:05.256 10:35:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:05.257 10:35:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.257 10:35:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.257 [2024-11-15 10:35:26.179674] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:05.257 10:35:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.257 10:35:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:05.257 10:35:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:05.257 10:35:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:05.257 10:35:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:05.257 10:35:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:05.257 10:35:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:05.257 10:35:26 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.257 10:35:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.257 [2024-11-15 10:35:26.231475] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:05.257 [2024-11-15 10:35:26.231520] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:05.257 [2024-11-15 10:35:26.231565] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:05.257 true 00:07:05.257 10:35:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.257 10:35:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:05.257 10:35:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:05.257 10:35:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.257 10:35:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.257 [2024-11-15 10:35:26.243691] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:05.257 10:35:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.257 10:35:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:05.257 10:35:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:05.257 10:35:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:05.257 10:35:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:05.257 10:35:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:05.257 10:35:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60522 00:07:05.257 10:35:26 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@954 -- # '[' -z 60522 ']' 00:07:05.257 10:35:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60522 00:07:05.257 10:35:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:05.257 10:35:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:05.257 10:35:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60522 00:07:05.257 killing process with pid 60522 00:07:05.257 10:35:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:05.257 10:35:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:05.257 10:35:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60522' 00:07:05.257 10:35:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60522 00:07:05.257 10:35:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60522 00:07:05.257 [2024-11-15 10:35:26.315285] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:05.257 [2024-11-15 10:35:26.315413] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:05.257 [2024-11-15 10:35:26.315483] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:05.257 [2024-11-15 10:35:26.315523] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:05.257 [2024-11-15 10:35:26.330758] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:06.642 ************************************ 00:07:06.642 END TEST raid0_resize_test 00:07:06.642 ************************************ 00:07:06.642 10:35:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:06.642 00:07:06.642 real 0m2.289s 00:07:06.642 user 0m2.489s 
00:07:06.642 sys 0m0.395s 00:07:06.642 10:35:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.642 10:35:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.642 10:35:27 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:07:06.642 10:35:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:06.642 10:35:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.642 10:35:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:06.642 ************************************ 00:07:06.642 START TEST raid1_resize_test 00:07:06.642 ************************************ 00:07:06.642 Process raid pid: 60578 00:07:06.642 10:35:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:07:06.642 10:35:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:07:06.642 10:35:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:06.642 10:35:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:06.642 10:35:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:06.642 10:35:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:06.642 10:35:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:06.642 10:35:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:06.642 10:35:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:06.643 10:35:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60578 00:07:06.643 10:35:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60578' 00:07:06.643 10:35:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60578 00:07:06.643 10:35:27 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:06.643 10:35:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60578 ']' 00:07:06.643 10:35:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.643 10:35:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.643 10:35:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.643 10:35:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.643 10:35:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.643 [2024-11-15 10:35:27.532906] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:07:06.643 [2024-11-15 10:35:27.533348] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:06.643 [2024-11-15 10:35:27.720665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.918 [2024-11-15 10:35:27.858132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.918 [2024-11-15 10:35:28.066968] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:06.918 [2024-11-15 10:35:28.067191] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:07.498 10:35:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.498 10:35:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:07.498 10:35:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:07.498 10:35:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.498 10:35:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.498 Base_1 00:07:07.498 10:35:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.498 10:35:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:07.498 10:35:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.498 10:35:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.498 Base_2 00:07:07.498 10:35:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.498 10:35:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:07:07.498 10:35:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:07.498 10:35:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.498 10:35:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.498 [2024-11-15 10:35:28.630068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:07.498 [2024-11-15 10:35:28.632632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:07.498 [2024-11-15 10:35:28.632844] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:07.498 [2024-11-15 10:35:28.632884] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:07.498 [2024-11-15 10:35:28.633207] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:07.498 [2024-11-15 10:35:28.633376] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:07.498 [2024-11-15 10:35:28.633394] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:07.498 [2024-11-15 10:35:28.633595] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:07.498 10:35:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.498 10:35:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:07.498 10:35:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.498 10:35:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.498 [2024-11-15 10:35:28.638053] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:07.498 [2024-11-15 10:35:28.638093] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:07.498 true 00:07:07.498 
10:35:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.498 10:35:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:07.498 10:35:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:07.498 10:35:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.498 10:35:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.498 [2024-11-15 10:35:28.650257] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:07.757 10:35:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.757 10:35:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:07.757 10:35:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:07.757 10:35:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:07.757 10:35:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:07:07.757 10:35:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:07.757 10:35:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:07.757 10:35:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.757 10:35:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.757 [2024-11-15 10:35:28.698035] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:07.757 [2024-11-15 10:35:28.698063] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:07.757 [2024-11-15 10:35:28.698103] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:07.757 true 00:07:07.757 10:35:28 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.757 10:35:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:07.757 10:35:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:07.757 10:35:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.757 10:35:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.757 [2024-11-15 10:35:28.710262] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:07.757 10:35:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.757 10:35:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:07.757 10:35:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:07.757 10:35:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:07.757 10:35:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:07.757 10:35:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:07.757 10:35:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60578 00:07:07.757 10:35:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60578 ']' 00:07:07.757 10:35:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60578 00:07:07.757 10:35:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:07.757 10:35:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:07.757 10:35:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60578 00:07:07.757 killing process with pid 60578 00:07:07.757 10:35:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:07.757 10:35:28 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:07.757 10:35:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60578' 00:07:07.757 10:35:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60578 00:07:07.757 [2024-11-15 10:35:28.790999] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:07.757 10:35:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60578 00:07:07.757 [2024-11-15 10:35:28.791236] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:07.758 [2024-11-15 10:35:28.791868] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:07.758 [2024-11-15 10:35:28.792018] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:07.758 [2024-11-15 10:35:28.807050] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:08.690 10:35:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:08.690 00:07:08.690 real 0m2.389s 00:07:08.690 user 0m2.723s 00:07:08.690 sys 0m0.387s 00:07:08.690 10:35:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.690 10:35:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.690 ************************************ 00:07:08.690 END TEST raid1_resize_test 00:07:08.690 ************************************ 00:07:08.948 10:35:29 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:08.948 10:35:29 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:08.948 10:35:29 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:08.948 10:35:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:08.948 10:35:29 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.948 10:35:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:08.948 ************************************ 00:07:08.948 START TEST raid_state_function_test 00:07:08.948 ************************************ 00:07:08.948 10:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:07:08.948 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:08.948 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:08.948 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:08.948 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:08.948 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:08.948 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:08.948 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:08.948 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:08.948 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:08.948 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:08.948 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:08.948 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:08.948 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:08.948 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:08.948 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:07:08.948 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:08.948 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:08.948 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:08.948 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:08.948 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:08.948 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:08.948 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:08.948 Process raid pid: 60635 00:07:08.948 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:08.948 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60635 00:07:08.948 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60635' 00:07:08.948 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60635 00:07:08.948 10:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:08.948 10:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60635 ']' 00:07:08.948 10:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.948 10:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:08.948 10:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:08.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.948 10:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:08.948 10:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.948 [2024-11-15 10:35:29.982254] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:07:08.948 [2024-11-15 10:35:29.982437] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:09.205 [2024-11-15 10:35:30.171748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.205 [2024-11-15 10:35:30.330426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.462 [2024-11-15 10:35:30.554030] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:09.462 [2024-11-15 10:35:30.554310] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:10.028 10:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.028 10:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:10.028 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:10.028 10:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.028 10:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.028 [2024-11-15 10:35:30.976542] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:10.028 [2024-11-15 10:35:30.976608] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:07:10.028 [2024-11-15 10:35:30.976627] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:10.028 [2024-11-15 10:35:30.976643] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:10.028 10:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.028 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:10.028 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:10.028 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:10.028 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:10.028 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:10.028 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:10.028 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:10.028 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:10.029 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:10.029 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:10.029 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.029 10:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.029 10:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.029 10:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:07:10.029 10:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.029 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:10.029 "name": "Existed_Raid", 00:07:10.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:10.029 "strip_size_kb": 64, 00:07:10.029 "state": "configuring", 00:07:10.029 "raid_level": "raid0", 00:07:10.029 "superblock": false, 00:07:10.029 "num_base_bdevs": 2, 00:07:10.029 "num_base_bdevs_discovered": 0, 00:07:10.029 "num_base_bdevs_operational": 2, 00:07:10.029 "base_bdevs_list": [ 00:07:10.029 { 00:07:10.029 "name": "BaseBdev1", 00:07:10.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:10.029 "is_configured": false, 00:07:10.029 "data_offset": 0, 00:07:10.029 "data_size": 0 00:07:10.029 }, 00:07:10.029 { 00:07:10.029 "name": "BaseBdev2", 00:07:10.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:10.029 "is_configured": false, 00:07:10.029 "data_offset": 0, 00:07:10.029 "data_size": 0 00:07:10.029 } 00:07:10.029 ] 00:07:10.029 }' 00:07:10.029 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:10.029 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.602 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:10.602 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.602 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.602 [2024-11-15 10:35:31.476620] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:10.602 [2024-11-15 10:35:31.476665] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:10.602 10:35:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.602 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:10.602 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.602 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.602 [2024-11-15 10:35:31.484584] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:10.602 [2024-11-15 10:35:31.484979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:10.602 [2024-11-15 10:35:31.485008] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:10.602 [2024-11-15 10:35:31.485030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:10.602 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.602 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:10.602 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.602 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.602 [2024-11-15 10:35:31.534290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:10.602 BaseBdev1 00:07:10.602 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.602 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:10.602 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:10.602 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local 
bdev_timeout= 00:07:10.602 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:10.602 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:10.602 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:10.602 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:10.602 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.602 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.602 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.602 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:10.602 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.602 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.602 [ 00:07:10.602 { 00:07:10.602 "name": "BaseBdev1", 00:07:10.602 "aliases": [ 00:07:10.602 "aae086ec-e537-4869-a713-e2a3939a9f9b" 00:07:10.602 ], 00:07:10.602 "product_name": "Malloc disk", 00:07:10.602 "block_size": 512, 00:07:10.602 "num_blocks": 65536, 00:07:10.602 "uuid": "aae086ec-e537-4869-a713-e2a3939a9f9b", 00:07:10.602 "assigned_rate_limits": { 00:07:10.602 "rw_ios_per_sec": 0, 00:07:10.602 "rw_mbytes_per_sec": 0, 00:07:10.602 "r_mbytes_per_sec": 0, 00:07:10.602 "w_mbytes_per_sec": 0 00:07:10.602 }, 00:07:10.602 "claimed": true, 00:07:10.602 "claim_type": "exclusive_write", 00:07:10.602 "zoned": false, 00:07:10.602 "supported_io_types": { 00:07:10.602 "read": true, 00:07:10.602 "write": true, 00:07:10.602 "unmap": true, 00:07:10.602 "flush": true, 00:07:10.602 "reset": true, 00:07:10.602 "nvme_admin": false, 00:07:10.602 "nvme_io": 
false, 00:07:10.602 "nvme_io_md": false, 00:07:10.602 "write_zeroes": true, 00:07:10.602 "zcopy": true, 00:07:10.602 "get_zone_info": false, 00:07:10.602 "zone_management": false, 00:07:10.602 "zone_append": false, 00:07:10.602 "compare": false, 00:07:10.602 "compare_and_write": false, 00:07:10.602 "abort": true, 00:07:10.602 "seek_hole": false, 00:07:10.602 "seek_data": false, 00:07:10.602 "copy": true, 00:07:10.602 "nvme_iov_md": false 00:07:10.602 }, 00:07:10.602 "memory_domains": [ 00:07:10.602 { 00:07:10.602 "dma_device_id": "system", 00:07:10.602 "dma_device_type": 1 00:07:10.602 }, 00:07:10.602 { 00:07:10.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:10.602 "dma_device_type": 2 00:07:10.602 } 00:07:10.602 ], 00:07:10.602 "driver_specific": {} 00:07:10.602 } 00:07:10.602 ] 00:07:10.602 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.602 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:10.602 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:10.602 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:10.602 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:10.602 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:10.602 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:10.602 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:10.602 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:10.602 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:10.602 10:35:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:10.602 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:10.602 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.602 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:10.602 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.602 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.602 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.602 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:10.602 "name": "Existed_Raid", 00:07:10.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:10.602 "strip_size_kb": 64, 00:07:10.602 "state": "configuring", 00:07:10.602 "raid_level": "raid0", 00:07:10.602 "superblock": false, 00:07:10.602 "num_base_bdevs": 2, 00:07:10.602 "num_base_bdevs_discovered": 1, 00:07:10.602 "num_base_bdevs_operational": 2, 00:07:10.602 "base_bdevs_list": [ 00:07:10.602 { 00:07:10.602 "name": "BaseBdev1", 00:07:10.602 "uuid": "aae086ec-e537-4869-a713-e2a3939a9f9b", 00:07:10.602 "is_configured": true, 00:07:10.602 "data_offset": 0, 00:07:10.602 "data_size": 65536 00:07:10.602 }, 00:07:10.602 { 00:07:10.602 "name": "BaseBdev2", 00:07:10.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:10.602 "is_configured": false, 00:07:10.602 "data_offset": 0, 00:07:10.602 "data_size": 0 00:07:10.602 } 00:07:10.602 ] 00:07:10.602 }' 00:07:10.602 10:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:10.602 10:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.169 10:35:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:11.169 10:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.169 10:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.169 [2024-11-15 10:35:32.070476] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:11.169 [2024-11-15 10:35:32.070555] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:11.169 10:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.169 10:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:11.169 10:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.169 10:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.169 [2024-11-15 10:35:32.078536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:11.169 [2024-11-15 10:35:32.081114] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:11.169 [2024-11-15 10:35:32.081171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:11.169 10:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.169 10:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:11.169 10:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:11.169 10:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:11.169 10:35:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:11.169 10:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:11.169 10:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:11.169 10:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:11.169 10:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:11.169 10:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:11.169 10:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:11.169 10:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:11.169 10:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:11.169 10:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:11.169 10:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:11.169 10:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.169 10:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.169 10:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.169 10:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:11.169 "name": "Existed_Raid", 00:07:11.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:11.169 "strip_size_kb": 64, 00:07:11.169 "state": "configuring", 00:07:11.169 "raid_level": "raid0", 00:07:11.169 "superblock": false, 00:07:11.169 "num_base_bdevs": 2, 00:07:11.169 "num_base_bdevs_discovered": 1, 00:07:11.169 "num_base_bdevs_operational": 2, 
00:07:11.169 "base_bdevs_list": [ 00:07:11.169 { 00:07:11.169 "name": "BaseBdev1", 00:07:11.169 "uuid": "aae086ec-e537-4869-a713-e2a3939a9f9b", 00:07:11.169 "is_configured": true, 00:07:11.169 "data_offset": 0, 00:07:11.169 "data_size": 65536 00:07:11.169 }, 00:07:11.169 { 00:07:11.169 "name": "BaseBdev2", 00:07:11.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:11.169 "is_configured": false, 00:07:11.169 "data_offset": 0, 00:07:11.169 "data_size": 0 00:07:11.169 } 00:07:11.169 ] 00:07:11.169 }' 00:07:11.169 10:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:11.169 10:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.737 10:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:11.737 10:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.737 10:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.737 [2024-11-15 10:35:32.633185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:11.737 [2024-11-15 10:35:32.633456] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:11.737 [2024-11-15 10:35:32.633548] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:11.737 [2024-11-15 10:35:32.634013] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:11.737 [2024-11-15 10:35:32.634235] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:11.737 [2024-11-15 10:35:32.634259] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:11.737 [2024-11-15 10:35:32.634606] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:11.737 BaseBdev2 00:07:11.737 
10:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.737 10:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:11.737 10:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:11.737 10:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:11.737 10:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:11.737 10:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:11.737 10:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:11.737 10:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:11.737 10:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.737 10:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.737 10:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.737 10:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:11.737 10:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.737 10:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.737 [ 00:07:11.737 { 00:07:11.737 "name": "BaseBdev2", 00:07:11.737 "aliases": [ 00:07:11.737 "a173bf6c-426d-469e-976b-f29becc6a526" 00:07:11.737 ], 00:07:11.737 "product_name": "Malloc disk", 00:07:11.737 "block_size": 512, 00:07:11.737 "num_blocks": 65536, 00:07:11.737 "uuid": "a173bf6c-426d-469e-976b-f29becc6a526", 00:07:11.737 "assigned_rate_limits": { 00:07:11.737 "rw_ios_per_sec": 0, 00:07:11.737 "rw_mbytes_per_sec": 0, 
00:07:11.737 "r_mbytes_per_sec": 0, 00:07:11.737 "w_mbytes_per_sec": 0 00:07:11.737 }, 00:07:11.737 "claimed": true, 00:07:11.737 "claim_type": "exclusive_write", 00:07:11.737 "zoned": false, 00:07:11.737 "supported_io_types": { 00:07:11.737 "read": true, 00:07:11.737 "write": true, 00:07:11.737 "unmap": true, 00:07:11.737 "flush": true, 00:07:11.737 "reset": true, 00:07:11.737 "nvme_admin": false, 00:07:11.737 "nvme_io": false, 00:07:11.737 "nvme_io_md": false, 00:07:11.737 "write_zeroes": true, 00:07:11.737 "zcopy": true, 00:07:11.737 "get_zone_info": false, 00:07:11.737 "zone_management": false, 00:07:11.737 "zone_append": false, 00:07:11.737 "compare": false, 00:07:11.737 "compare_and_write": false, 00:07:11.737 "abort": true, 00:07:11.737 "seek_hole": false, 00:07:11.737 "seek_data": false, 00:07:11.737 "copy": true, 00:07:11.737 "nvme_iov_md": false 00:07:11.737 }, 00:07:11.737 "memory_domains": [ 00:07:11.737 { 00:07:11.737 "dma_device_id": "system", 00:07:11.737 "dma_device_type": 1 00:07:11.737 }, 00:07:11.737 { 00:07:11.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:11.737 "dma_device_type": 2 00:07:11.737 } 00:07:11.737 ], 00:07:11.737 "driver_specific": {} 00:07:11.737 } 00:07:11.737 ] 00:07:11.737 10:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.737 10:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:11.737 10:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:11.737 10:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:11.737 10:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:11.737 10:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:11.737 10:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:07:11.737 10:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:11.737 10:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:11.737 10:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:11.737 10:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:11.737 10:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:11.737 10:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:11.737 10:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:11.737 10:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:11.737 10:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.737 10:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.737 10:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:11.737 10:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.737 10:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:11.737 "name": "Existed_Raid", 00:07:11.737 "uuid": "a9c1869b-7f95-4d3b-b59b-62c12a2f87fa", 00:07:11.737 "strip_size_kb": 64, 00:07:11.737 "state": "online", 00:07:11.737 "raid_level": "raid0", 00:07:11.737 "superblock": false, 00:07:11.737 "num_base_bdevs": 2, 00:07:11.737 "num_base_bdevs_discovered": 2, 00:07:11.737 "num_base_bdevs_operational": 2, 00:07:11.737 "base_bdevs_list": [ 00:07:11.737 { 00:07:11.737 "name": "BaseBdev1", 00:07:11.737 "uuid": "aae086ec-e537-4869-a713-e2a3939a9f9b", 00:07:11.737 
"is_configured": true, 00:07:11.737 "data_offset": 0, 00:07:11.737 "data_size": 65536 00:07:11.737 }, 00:07:11.737 { 00:07:11.737 "name": "BaseBdev2", 00:07:11.737 "uuid": "a173bf6c-426d-469e-976b-f29becc6a526", 00:07:11.737 "is_configured": true, 00:07:11.737 "data_offset": 0, 00:07:11.737 "data_size": 65536 00:07:11.737 } 00:07:11.737 ] 00:07:11.737 }' 00:07:11.737 10:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:11.737 10:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.996 10:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:11.996 10:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:11.996 10:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:11.996 10:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:11.996 10:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:11.996 10:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:11.996 10:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:11.996 10:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:11.996 10:35:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.996 10:35:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.255 [2024-11-15 10:35:33.157738] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:12.255 10:35:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.255 10:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:07:12.255 "name": "Existed_Raid", 00:07:12.255 "aliases": [ 00:07:12.255 "a9c1869b-7f95-4d3b-b59b-62c12a2f87fa" 00:07:12.255 ], 00:07:12.255 "product_name": "Raid Volume", 00:07:12.255 "block_size": 512, 00:07:12.255 "num_blocks": 131072, 00:07:12.255 "uuid": "a9c1869b-7f95-4d3b-b59b-62c12a2f87fa", 00:07:12.255 "assigned_rate_limits": { 00:07:12.255 "rw_ios_per_sec": 0, 00:07:12.255 "rw_mbytes_per_sec": 0, 00:07:12.255 "r_mbytes_per_sec": 0, 00:07:12.255 "w_mbytes_per_sec": 0 00:07:12.255 }, 00:07:12.255 "claimed": false, 00:07:12.255 "zoned": false, 00:07:12.255 "supported_io_types": { 00:07:12.255 "read": true, 00:07:12.255 "write": true, 00:07:12.255 "unmap": true, 00:07:12.255 "flush": true, 00:07:12.255 "reset": true, 00:07:12.255 "nvme_admin": false, 00:07:12.255 "nvme_io": false, 00:07:12.255 "nvme_io_md": false, 00:07:12.255 "write_zeroes": true, 00:07:12.255 "zcopy": false, 00:07:12.255 "get_zone_info": false, 00:07:12.255 "zone_management": false, 00:07:12.255 "zone_append": false, 00:07:12.255 "compare": false, 00:07:12.255 "compare_and_write": false, 00:07:12.255 "abort": false, 00:07:12.255 "seek_hole": false, 00:07:12.255 "seek_data": false, 00:07:12.255 "copy": false, 00:07:12.255 "nvme_iov_md": false 00:07:12.255 }, 00:07:12.255 "memory_domains": [ 00:07:12.255 { 00:07:12.255 "dma_device_id": "system", 00:07:12.255 "dma_device_type": 1 00:07:12.255 }, 00:07:12.255 { 00:07:12.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:12.255 "dma_device_type": 2 00:07:12.255 }, 00:07:12.255 { 00:07:12.255 "dma_device_id": "system", 00:07:12.255 "dma_device_type": 1 00:07:12.255 }, 00:07:12.255 { 00:07:12.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:12.255 "dma_device_type": 2 00:07:12.255 } 00:07:12.255 ], 00:07:12.255 "driver_specific": { 00:07:12.255 "raid": { 00:07:12.255 "uuid": "a9c1869b-7f95-4d3b-b59b-62c12a2f87fa", 00:07:12.255 "strip_size_kb": 64, 00:07:12.255 "state": "online", 00:07:12.255 "raid_level": "raid0", 
00:07:12.255 "superblock": false, 00:07:12.255 "num_base_bdevs": 2, 00:07:12.255 "num_base_bdevs_discovered": 2, 00:07:12.255 "num_base_bdevs_operational": 2, 00:07:12.255 "base_bdevs_list": [ 00:07:12.255 { 00:07:12.255 "name": "BaseBdev1", 00:07:12.255 "uuid": "aae086ec-e537-4869-a713-e2a3939a9f9b", 00:07:12.255 "is_configured": true, 00:07:12.255 "data_offset": 0, 00:07:12.255 "data_size": 65536 00:07:12.255 }, 00:07:12.255 { 00:07:12.255 "name": "BaseBdev2", 00:07:12.255 "uuid": "a173bf6c-426d-469e-976b-f29becc6a526", 00:07:12.255 "is_configured": true, 00:07:12.255 "data_offset": 0, 00:07:12.255 "data_size": 65536 00:07:12.255 } 00:07:12.255 ] 00:07:12.255 } 00:07:12.255 } 00:07:12.255 }' 00:07:12.255 10:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:12.255 10:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:12.255 BaseBdev2' 00:07:12.255 10:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:12.255 10:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:12.255 10:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:12.255 10:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:12.255 10:35:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.255 10:35:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.255 10:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:12.256 10:35:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:07:12.256 10:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:12.256 10:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:12.256 10:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:12.256 10:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:12.256 10:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:12.256 10:35:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.256 10:35:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.256 10:35:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.256 10:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:12.256 10:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:12.256 10:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:12.256 10:35:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.256 10:35:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.256 [2024-11-15 10:35:33.409463] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:12.256 [2024-11-15 10:35:33.409638] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:12.256 [2024-11-15 10:35:33.409728] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:12.514 10:35:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.514 10:35:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:12.514 10:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:12.514 10:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:12.514 10:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:12.514 10:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:12.514 10:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:12.514 10:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:12.514 10:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:12.514 10:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:12.514 10:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:12.514 10:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:12.514 10:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:12.514 10:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:12.514 10:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:12.514 10:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:12.514 10:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.514 10:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:12.514 10:35:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:12.514 10:35:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.514 10:35:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.514 10:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:12.514 "name": "Existed_Raid", 00:07:12.514 "uuid": "a9c1869b-7f95-4d3b-b59b-62c12a2f87fa", 00:07:12.514 "strip_size_kb": 64, 00:07:12.514 "state": "offline", 00:07:12.514 "raid_level": "raid0", 00:07:12.514 "superblock": false, 00:07:12.514 "num_base_bdevs": 2, 00:07:12.514 "num_base_bdevs_discovered": 1, 00:07:12.514 "num_base_bdevs_operational": 1, 00:07:12.514 "base_bdevs_list": [ 00:07:12.514 { 00:07:12.514 "name": null, 00:07:12.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:12.514 "is_configured": false, 00:07:12.514 "data_offset": 0, 00:07:12.514 "data_size": 65536 00:07:12.514 }, 00:07:12.514 { 00:07:12.514 "name": "BaseBdev2", 00:07:12.514 "uuid": "a173bf6c-426d-469e-976b-f29becc6a526", 00:07:12.514 "is_configured": true, 00:07:12.514 "data_offset": 0, 00:07:12.514 "data_size": 65536 00:07:12.514 } 00:07:12.514 ] 00:07:12.514 }' 00:07:12.514 10:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:12.514 10:35:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.081 10:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:13.081 10:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:13.081 10:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:13.081 10:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.081 10:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.081 10:35:34 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:13.081 10:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.081 10:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:13.081 10:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:13.081 10:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:13.081 10:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.081 10:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.081 [2024-11-15 10:35:34.075859] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:13.081 [2024-11-15 10:35:34.075926] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:13.081 10:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.081 10:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:13.081 10:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:13.081 10:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:13.081 10:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:13.081 10:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.081 10:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.081 10:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.081 10:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:13.081 10:35:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:13.081 10:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:13.081 10:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60635 00:07:13.081 10:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60635 ']' 00:07:13.081 10:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 60635 00:07:13.081 10:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:13.081 10:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:13.081 10:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60635 00:07:13.337 killing process with pid 60635 00:07:13.337 10:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:13.337 10:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:13.337 10:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60635' 00:07:13.337 10:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60635 00:07:13.337 [2024-11-15 10:35:34.248765] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:13.337 10:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60635 00:07:13.337 [2024-11-15 10:35:34.263466] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:14.308 10:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:14.308 00:07:14.308 real 0m5.419s 00:07:14.308 user 0m8.187s 00:07:14.308 sys 0m0.717s 00:07:14.308 10:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
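The test that just finished decided its expected post-removal state through has_redundancy (bdev_raid.sh@198-200 and @260-262 in the xtrace near the top of this excerpt): raid0 falls through the case statement, returns 1, and the test therefore expects the array to go offline after a base bdev is deleted. A minimal standalone sketch of that flow; which levels count as redundant is an assumption, since the trace only shows raid0 hitting "return 1":

```shell
# Sketch of the has_redundancy / expected_state logic traced above.
# The redundant-level branches are a guess from the xtrace, not copied
# from SPDK's bdev_raid.sh.
has_redundancy() {
    case $1 in
        raid1 | raid5f) return 0 ;;
        *) return 1 ;;
    esac
}

raid_level=raid0
if has_redundancy "$raid_level"; then
    expected_state=online   # redundant arrays survive losing one base bdev
else
    expected_state=offline  # raid0 does not
fi
echo "$expected_state"
```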
00:07:14.308 10:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.308 ************************************ 00:07:14.308 END TEST raid_state_function_test 00:07:14.308 ************************************ 00:07:14.308 10:35:35 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:14.308 10:35:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:14.308 10:35:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.308 10:35:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:14.308 ************************************ 00:07:14.308 START TEST raid_state_function_test_sb 00:07:14.308 ************************************ 00:07:14.308 10:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:07:14.308 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:14.308 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:14.308 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:14.308 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:14.308 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:14.308 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:14.308 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:14.308 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:14.308 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:14.308 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # 
echo BaseBdev2 00:07:14.308 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:14.308 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:14.308 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:14.308 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:14.308 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:14.308 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:14.308 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:14.308 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:14.308 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:14.308 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:14.308 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:14.308 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:14.308 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:14.308 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60898 00:07:14.308 Process raid pid: 60898 00:07:14.308 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:14.308 10:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60898' 00:07:14.308 10:35:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 60898 00:07:14.308 10:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 60898 ']' 00:07:14.308 10:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.308 10:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.308 10:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.308 10:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.308 10:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.308 [2024-11-15 10:35:35.440676] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
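Before the bdev_svc app comes up, the xtrace at bdev_raid.sh@215-223 above shows how raid_state_function_test turns its raid_level and superblock parameters into bdev_raid_create flags (-z 64 and -s, visible later in the rpc_cmd calls). A condensed sketch of that assembly, echoing the resulting command instead of invoking a live rpc_cmd:

```shell
raid_level=raid0
superblock=true

# raid1 takes no strip size; every other level gets -z 64
# (bdev_raid.sh@215-217 in the trace above).
if [ "$raid_level" != raid1 ]; then
    strip_size=64
    strip_size_create_arg="-z $strip_size"
else
    strip_size=0
    strip_size_create_arg=
fi

# -s asks bdev_raid_create to write superblocks (bdev_raid.sh@222-223).
if [ "$superblock" = true ]; then
    superblock_create_arg=-s
else
    superblock_create_arg=
fi

echo rpc_cmd bdev_raid_create $strip_size_create_arg $superblock_create_arg \
    -r "$raid_level" -b "'BaseBdev1 BaseBdev2'" -n Existed_Raid
```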
00:07:14.308 [2024-11-15 10:35:35.441415] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:14.567 [2024-11-15 10:35:35.618781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.825 [2024-11-15 10:35:35.749424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.825 [2024-11-15 10:35:35.957775] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:14.825 [2024-11-15 10:35:35.957828] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:15.392 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:15.392 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:15.392 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:15.392 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.392 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.392 [2024-11-15 10:35:36.457395] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:15.392 [2024-11-15 10:35:36.457457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:15.392 [2024-11-15 10:35:36.457475] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:15.392 [2024-11-15 10:35:36.457506] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:15.392 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.392 
10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:15.392 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:15.392 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:15.392 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:15.392 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:15.392 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:15.392 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:15.392 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:15.392 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:15.392 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:15.392 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.392 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:15.392 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.392 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.392 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.392 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:15.392 "name": "Existed_Raid", 00:07:15.392 "uuid": "0976b858-60b5-4690-aa14-c201a64c5d38", 00:07:15.392 "strip_size_kb": 
64, 00:07:15.392 "state": "configuring", 00:07:15.392 "raid_level": "raid0", 00:07:15.392 "superblock": true, 00:07:15.392 "num_base_bdevs": 2, 00:07:15.392 "num_base_bdevs_discovered": 0, 00:07:15.392 "num_base_bdevs_operational": 2, 00:07:15.392 "base_bdevs_list": [ 00:07:15.392 { 00:07:15.392 "name": "BaseBdev1", 00:07:15.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:15.392 "is_configured": false, 00:07:15.392 "data_offset": 0, 00:07:15.392 "data_size": 0 00:07:15.392 }, 00:07:15.392 { 00:07:15.392 "name": "BaseBdev2", 00:07:15.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:15.392 "is_configured": false, 00:07:15.392 "data_offset": 0, 00:07:15.392 "data_size": 0 00:07:15.392 } 00:07:15.392 ] 00:07:15.392 }' 00:07:15.392 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:15.392 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.960 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:15.960 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.960 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.960 [2024-11-15 10:35:36.945468] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:15.960 [2024-11-15 10:35:36.945543] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:15.960 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.960 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:15.960 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.960 10:35:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.960 [2024-11-15 10:35:36.953444] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:15.960 [2024-11-15 10:35:36.953508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:15.960 [2024-11-15 10:35:36.953526] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:15.960 [2024-11-15 10:35:36.953546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:15.960 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.960 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:15.960 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.960 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.960 [2024-11-15 10:35:36.998568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:15.960 BaseBdev1 00:07:15.960 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.960 10:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:15.960 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:15.960 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:15.960 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:15.960 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:15.960 10:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:07:15.960 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:15.960 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.961 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.961 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.961 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:15.961 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.961 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.961 [ 00:07:15.961 { 00:07:15.961 "name": "BaseBdev1", 00:07:15.961 "aliases": [ 00:07:15.961 "9c0d8670-8015-4469-a56b-cb4895202754" 00:07:15.961 ], 00:07:15.961 "product_name": "Malloc disk", 00:07:15.961 "block_size": 512, 00:07:15.961 "num_blocks": 65536, 00:07:15.961 "uuid": "9c0d8670-8015-4469-a56b-cb4895202754", 00:07:15.961 "assigned_rate_limits": { 00:07:15.961 "rw_ios_per_sec": 0, 00:07:15.961 "rw_mbytes_per_sec": 0, 00:07:15.961 "r_mbytes_per_sec": 0, 00:07:15.961 "w_mbytes_per_sec": 0 00:07:15.961 }, 00:07:15.961 "claimed": true, 00:07:15.961 "claim_type": "exclusive_write", 00:07:15.961 "zoned": false, 00:07:15.961 "supported_io_types": { 00:07:15.961 "read": true, 00:07:15.961 "write": true, 00:07:15.961 "unmap": true, 00:07:15.961 "flush": true, 00:07:15.961 "reset": true, 00:07:15.961 "nvme_admin": false, 00:07:15.961 "nvme_io": false, 00:07:15.961 "nvme_io_md": false, 00:07:15.961 "write_zeroes": true, 00:07:15.961 "zcopy": true, 00:07:15.961 "get_zone_info": false, 00:07:15.961 "zone_management": false, 00:07:15.961 "zone_append": false, 00:07:15.961 "compare": false, 00:07:15.961 "compare_and_write": false, 00:07:15.961 
"abort": true, 00:07:15.961 "seek_hole": false, 00:07:15.961 "seek_data": false, 00:07:15.961 "copy": true, 00:07:15.961 "nvme_iov_md": false 00:07:15.961 }, 00:07:15.961 "memory_domains": [ 00:07:15.961 { 00:07:15.961 "dma_device_id": "system", 00:07:15.961 "dma_device_type": 1 00:07:15.961 }, 00:07:15.961 { 00:07:15.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.961 "dma_device_type": 2 00:07:15.961 } 00:07:15.961 ], 00:07:15.961 "driver_specific": {} 00:07:15.961 } 00:07:15.961 ] 00:07:15.961 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.961 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:15.961 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:15.961 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:15.961 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:15.961 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:15.961 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:15.961 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:15.961 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:15.961 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:15.961 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:15.961 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:15.961 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:15.961 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.961 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.961 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:15.961 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.961 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:15.961 "name": "Existed_Raid", 00:07:15.961 "uuid": "7fe35d26-16ed-4a3c-85e3-2ab2eb7691f0", 00:07:15.961 "strip_size_kb": 64, 00:07:15.961 "state": "configuring", 00:07:15.961 "raid_level": "raid0", 00:07:15.961 "superblock": true, 00:07:15.961 "num_base_bdevs": 2, 00:07:15.961 "num_base_bdevs_discovered": 1, 00:07:15.961 "num_base_bdevs_operational": 2, 00:07:15.961 "base_bdevs_list": [ 00:07:15.961 { 00:07:15.961 "name": "BaseBdev1", 00:07:15.961 "uuid": "9c0d8670-8015-4469-a56b-cb4895202754", 00:07:15.961 "is_configured": true, 00:07:15.961 "data_offset": 2048, 00:07:15.961 "data_size": 63488 00:07:15.961 }, 00:07:15.961 { 00:07:15.961 "name": "BaseBdev2", 00:07:15.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:15.961 "is_configured": false, 00:07:15.961 "data_offset": 0, 00:07:15.961 "data_size": 0 00:07:15.961 } 00:07:15.961 ] 00:07:15.961 }' 00:07:15.961 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:15.961 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.526 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:16.526 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.526 10:35:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:16.526 [2024-11-15 10:35:37.510737] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:16.526 [2024-11-15 10:35:37.510805] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:16.526 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.526 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:16.526 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.526 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.526 [2024-11-15 10:35:37.518785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:16.526 [2024-11-15 10:35:37.521182] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:16.526 [2024-11-15 10:35:37.521238] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:16.526 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.526 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:16.526 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:16.526 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:16.526 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:16.527 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:16.527 10:35:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:16.527 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:16.527 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:16.527 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:16.527 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:16.527 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:16.527 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:16.527 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.527 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:16.527 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.527 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.527 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.527 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:16.527 "name": "Existed_Raid", 00:07:16.527 "uuid": "8b179bfa-21cd-4305-8cb5-5b67ed7d586d", 00:07:16.527 "strip_size_kb": 64, 00:07:16.527 "state": "configuring", 00:07:16.527 "raid_level": "raid0", 00:07:16.527 "superblock": true, 00:07:16.527 "num_base_bdevs": 2, 00:07:16.527 "num_base_bdevs_discovered": 1, 00:07:16.527 "num_base_bdevs_operational": 2, 00:07:16.527 "base_bdevs_list": [ 00:07:16.527 { 00:07:16.527 "name": "BaseBdev1", 00:07:16.527 "uuid": "9c0d8670-8015-4469-a56b-cb4895202754", 00:07:16.527 "is_configured": true, 00:07:16.527 "data_offset": 2048, 
00:07:16.527 "data_size": 63488 00:07:16.527 }, 00:07:16.527 { 00:07:16.527 "name": "BaseBdev2", 00:07:16.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:16.527 "is_configured": false, 00:07:16.527 "data_offset": 0, 00:07:16.527 "data_size": 0 00:07:16.527 } 00:07:16.527 ] 00:07:16.527 }' 00:07:16.527 10:35:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:16.527 10:35:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.096 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:17.096 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.096 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.096 [2024-11-15 10:35:38.077582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:17.096 [2024-11-15 10:35:38.077896] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:17.096 [2024-11-15 10:35:38.077917] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:17.096 [2024-11-15 10:35:38.078244] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:17.096 BaseBdev2 00:07:17.096 [2024-11-15 10:35:38.078462] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:17.096 [2024-11-15 10:35:38.078511] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:17.096 [2024-11-15 10:35:38.078689] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:17.096 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.096 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:07:17.096 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:17.096 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:17.096 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:17.096 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:17.096 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:17.096 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:17.096 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.096 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.096 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.096 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:17.096 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.096 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.096 [ 00:07:17.096 { 00:07:17.096 "name": "BaseBdev2", 00:07:17.096 "aliases": [ 00:07:17.096 "7e542ce6-08e1-476a-98b8-57d69f316563" 00:07:17.096 ], 00:07:17.096 "product_name": "Malloc disk", 00:07:17.096 "block_size": 512, 00:07:17.096 "num_blocks": 65536, 00:07:17.096 "uuid": "7e542ce6-08e1-476a-98b8-57d69f316563", 00:07:17.096 "assigned_rate_limits": { 00:07:17.096 "rw_ios_per_sec": 0, 00:07:17.096 "rw_mbytes_per_sec": 0, 00:07:17.096 "r_mbytes_per_sec": 0, 00:07:17.096 "w_mbytes_per_sec": 0 00:07:17.096 }, 00:07:17.096 "claimed": true, 00:07:17.096 "claim_type": 
"exclusive_write", 00:07:17.096 "zoned": false, 00:07:17.096 "supported_io_types": { 00:07:17.096 "read": true, 00:07:17.096 "write": true, 00:07:17.096 "unmap": true, 00:07:17.096 "flush": true, 00:07:17.096 "reset": true, 00:07:17.096 "nvme_admin": false, 00:07:17.096 "nvme_io": false, 00:07:17.096 "nvme_io_md": false, 00:07:17.096 "write_zeroes": true, 00:07:17.096 "zcopy": true, 00:07:17.096 "get_zone_info": false, 00:07:17.096 "zone_management": false, 00:07:17.096 "zone_append": false, 00:07:17.096 "compare": false, 00:07:17.096 "compare_and_write": false, 00:07:17.096 "abort": true, 00:07:17.096 "seek_hole": false, 00:07:17.096 "seek_data": false, 00:07:17.096 "copy": true, 00:07:17.096 "nvme_iov_md": false 00:07:17.096 }, 00:07:17.096 "memory_domains": [ 00:07:17.096 { 00:07:17.096 "dma_device_id": "system", 00:07:17.096 "dma_device_type": 1 00:07:17.096 }, 00:07:17.096 { 00:07:17.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:17.096 "dma_device_type": 2 00:07:17.096 } 00:07:17.096 ], 00:07:17.096 "driver_specific": {} 00:07:17.096 } 00:07:17.096 ] 00:07:17.096 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.096 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:17.096 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:17.096 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:17.096 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:17.096 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:17.096 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:17.096 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:17.096 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:17.096 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:17.096 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:17.096 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:17.096 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:17.096 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:17.096 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.096 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:17.096 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.096 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.096 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.096 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:17.096 "name": "Existed_Raid", 00:07:17.096 "uuid": "8b179bfa-21cd-4305-8cb5-5b67ed7d586d", 00:07:17.096 "strip_size_kb": 64, 00:07:17.096 "state": "online", 00:07:17.096 "raid_level": "raid0", 00:07:17.096 "superblock": true, 00:07:17.096 "num_base_bdevs": 2, 00:07:17.096 "num_base_bdevs_discovered": 2, 00:07:17.096 "num_base_bdevs_operational": 2, 00:07:17.096 "base_bdevs_list": [ 00:07:17.096 { 00:07:17.096 "name": "BaseBdev1", 00:07:17.096 "uuid": "9c0d8670-8015-4469-a56b-cb4895202754", 00:07:17.096 "is_configured": true, 00:07:17.096 "data_offset": 2048, 00:07:17.096 "data_size": 63488 
00:07:17.096 }, 00:07:17.096 { 00:07:17.096 "name": "BaseBdev2", 00:07:17.096 "uuid": "7e542ce6-08e1-476a-98b8-57d69f316563", 00:07:17.096 "is_configured": true, 00:07:17.096 "data_offset": 2048, 00:07:17.096 "data_size": 63488 00:07:17.096 } 00:07:17.096 ] 00:07:17.096 }' 00:07:17.096 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:17.096 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.664 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:17.664 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:17.664 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:17.664 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:17.664 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:17.664 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:17.664 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:17.664 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:17.664 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.664 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.664 [2024-11-15 10:35:38.618101] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:17.664 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.664 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:17.664 "name": 
"Existed_Raid", 00:07:17.664 "aliases": [ 00:07:17.664 "8b179bfa-21cd-4305-8cb5-5b67ed7d586d" 00:07:17.664 ], 00:07:17.664 "product_name": "Raid Volume", 00:07:17.664 "block_size": 512, 00:07:17.664 "num_blocks": 126976, 00:07:17.664 "uuid": "8b179bfa-21cd-4305-8cb5-5b67ed7d586d", 00:07:17.664 "assigned_rate_limits": { 00:07:17.664 "rw_ios_per_sec": 0, 00:07:17.664 "rw_mbytes_per_sec": 0, 00:07:17.664 "r_mbytes_per_sec": 0, 00:07:17.664 "w_mbytes_per_sec": 0 00:07:17.664 }, 00:07:17.664 "claimed": false, 00:07:17.664 "zoned": false, 00:07:17.664 "supported_io_types": { 00:07:17.664 "read": true, 00:07:17.664 "write": true, 00:07:17.664 "unmap": true, 00:07:17.664 "flush": true, 00:07:17.664 "reset": true, 00:07:17.664 "nvme_admin": false, 00:07:17.664 "nvme_io": false, 00:07:17.664 "nvme_io_md": false, 00:07:17.665 "write_zeroes": true, 00:07:17.665 "zcopy": false, 00:07:17.665 "get_zone_info": false, 00:07:17.665 "zone_management": false, 00:07:17.665 "zone_append": false, 00:07:17.665 "compare": false, 00:07:17.665 "compare_and_write": false, 00:07:17.665 "abort": false, 00:07:17.665 "seek_hole": false, 00:07:17.665 "seek_data": false, 00:07:17.665 "copy": false, 00:07:17.665 "nvme_iov_md": false 00:07:17.665 }, 00:07:17.665 "memory_domains": [ 00:07:17.665 { 00:07:17.665 "dma_device_id": "system", 00:07:17.665 "dma_device_type": 1 00:07:17.665 }, 00:07:17.665 { 00:07:17.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:17.665 "dma_device_type": 2 00:07:17.665 }, 00:07:17.665 { 00:07:17.665 "dma_device_id": "system", 00:07:17.665 "dma_device_type": 1 00:07:17.665 }, 00:07:17.665 { 00:07:17.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:17.665 "dma_device_type": 2 00:07:17.665 } 00:07:17.665 ], 00:07:17.665 "driver_specific": { 00:07:17.665 "raid": { 00:07:17.665 "uuid": "8b179bfa-21cd-4305-8cb5-5b67ed7d586d", 00:07:17.665 "strip_size_kb": 64, 00:07:17.665 "state": "online", 00:07:17.665 "raid_level": "raid0", 00:07:17.665 "superblock": true, 00:07:17.665 
"num_base_bdevs": 2, 00:07:17.665 "num_base_bdevs_discovered": 2, 00:07:17.665 "num_base_bdevs_operational": 2, 00:07:17.665 "base_bdevs_list": [ 00:07:17.665 { 00:07:17.665 "name": "BaseBdev1", 00:07:17.665 "uuid": "9c0d8670-8015-4469-a56b-cb4895202754", 00:07:17.665 "is_configured": true, 00:07:17.665 "data_offset": 2048, 00:07:17.665 "data_size": 63488 00:07:17.665 }, 00:07:17.665 { 00:07:17.665 "name": "BaseBdev2", 00:07:17.665 "uuid": "7e542ce6-08e1-476a-98b8-57d69f316563", 00:07:17.665 "is_configured": true, 00:07:17.665 "data_offset": 2048, 00:07:17.665 "data_size": 63488 00:07:17.665 } 00:07:17.665 ] 00:07:17.665 } 00:07:17.665 } 00:07:17.665 }' 00:07:17.665 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:17.665 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:17.665 BaseBdev2' 00:07:17.665 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:17.665 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:17.665 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:17.665 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:17.665 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.665 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.665 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:17.665 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:17.665 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:17.665 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:17.665 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:17.665 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:17.665 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:17.665 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.665 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.925 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.925 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:17.925 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:17.925 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:17.925 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.925 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.925 [2024-11-15 10:35:38.857887] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:17.925 [2024-11-15 10:35:38.857935] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:17.925 [2024-11-15 10:35:38.858006] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:17.925 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:07:17.925 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:17.925 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:17.925 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:17.925 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:17.925 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:17.925 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:17.925 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:17.925 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:17.925 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:17.925 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:17.925 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:17.925 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:17.925 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:17.925 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:17.925 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:17.925 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.925 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.925 10:35:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.925 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:17.925 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.925 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:17.925 "name": "Existed_Raid", 00:07:17.925 "uuid": "8b179bfa-21cd-4305-8cb5-5b67ed7d586d", 00:07:17.925 "strip_size_kb": 64, 00:07:17.925 "state": "offline", 00:07:17.925 "raid_level": "raid0", 00:07:17.925 "superblock": true, 00:07:17.925 "num_base_bdevs": 2, 00:07:17.925 "num_base_bdevs_discovered": 1, 00:07:17.925 "num_base_bdevs_operational": 1, 00:07:17.926 "base_bdevs_list": [ 00:07:17.926 { 00:07:17.926 "name": null, 00:07:17.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.926 "is_configured": false, 00:07:17.926 "data_offset": 0, 00:07:17.926 "data_size": 63488 00:07:17.926 }, 00:07:17.926 { 00:07:17.926 "name": "BaseBdev2", 00:07:17.926 "uuid": "7e542ce6-08e1-476a-98b8-57d69f316563", 00:07:17.926 "is_configured": true, 00:07:17.926 "data_offset": 2048, 00:07:17.926 "data_size": 63488 00:07:17.926 } 00:07:17.926 ] 00:07:17.926 }' 00:07:17.926 10:35:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:17.926 10:35:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.494 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:18.494 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:18.494 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:18.494 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.494 10:35:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.494 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.494 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.494 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:18.494 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:18.494 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:18.494 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.494 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.494 [2024-11-15 10:35:39.495394] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:18.494 [2024-11-15 10:35:39.495465] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:18.494 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.494 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:18.494 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:18.494 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:18.494 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.494 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.494 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.494 10:35:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.494 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:18.494 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:18.494 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:18.494 10:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60898 00:07:18.494 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 60898 ']' 00:07:18.494 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 60898 00:07:18.494 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:18.494 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:18.494 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60898 00:07:18.753 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:18.753 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:18.753 killing process with pid 60898 00:07:18.753 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60898' 00:07:18.753 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 60898 00:07:18.753 [2024-11-15 10:35:39.670569] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:18.753 10:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 60898 00:07:18.753 [2024-11-15 10:35:39.685381] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:19.688 10:35:40 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@328 -- # return 0 00:07:19.688 00:07:19.688 real 0m5.360s 00:07:19.688 user 0m8.118s 00:07:19.688 sys 0m0.740s 00:07:19.688 10:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.688 10:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.688 ************************************ 00:07:19.688 END TEST raid_state_function_test_sb 00:07:19.688 ************************************ 00:07:19.688 10:35:40 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:19.688 10:35:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:19.688 10:35:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.688 10:35:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:19.688 ************************************ 00:07:19.688 START TEST raid_superblock_test 00:07:19.688 ************************************ 00:07:19.688 10:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:07:19.688 10:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:19.688 10:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:19.688 10:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:19.688 10:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:19.688 10:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:19.688 10:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:19.688 10:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:19.688 10:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:19.688 10:35:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:19.688 10:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:19.688 10:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:19.688 10:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:19.688 10:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:19.688 10:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:19.688 10:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:19.688 10:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:19.688 10:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61151 00:07:19.688 10:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61151 00:07:19.688 10:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:19.688 10:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61151 ']' 00:07:19.688 10:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.688 10:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.688 10:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:19.688 10:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.688 10:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.946 [2024-11-15 10:35:40.887739] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:07:19.946 [2024-11-15 10:35:40.887897] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61151 ] 00:07:19.946 [2024-11-15 10:35:41.063351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.204 [2024-11-15 10:35:41.197544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.511 [2024-11-15 10:35:41.403397] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:20.511 [2024-11-15 10:35:41.403479] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:20.770 10:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.770 10:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:20.770 10:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:20.770 10:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:20.770 10:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:20.770 10:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:20.770 10:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:20.770 10:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:20.770 10:35:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:20.770 10:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:20.770 10:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:20.770 10:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.770 10:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.770 malloc1 00:07:20.770 10:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.770 10:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:20.770 10:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.770 10:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.770 [2024-11-15 10:35:41.913131] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:20.770 [2024-11-15 10:35:41.913209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:20.770 [2024-11-15 10:35:41.913246] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:20.770 [2024-11-15 10:35:41.913264] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:20.770 [2024-11-15 10:35:41.916061] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:20.770 [2024-11-15 10:35:41.916108] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:20.770 pt1 00:07:20.770 10:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.770 10:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:20.770 10:35:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:20.770 10:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:20.770 10:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:20.770 10:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:20.770 10:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:20.770 10:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:20.770 10:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:20.770 10:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:20.770 10:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.770 10:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.029 malloc2 00:07:21.029 10:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.029 10:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:21.029 10:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.029 10:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.029 [2024-11-15 10:35:41.969521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:21.029 [2024-11-15 10:35:41.969589] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:21.029 [2024-11-15 10:35:41.969621] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:21.029 
[2024-11-15 10:35:41.969636] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:21.029 [2024-11-15 10:35:41.972355] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:21.029 [2024-11-15 10:35:41.972409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:21.029 pt2 00:07:21.029 10:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.029 10:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:21.029 10:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:21.029 10:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:21.029 10:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.029 10:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.029 [2024-11-15 10:35:41.981592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:21.029 [2024-11-15 10:35:41.984045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:21.029 [2024-11-15 10:35:41.984263] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:21.029 [2024-11-15 10:35:41.984284] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:21.029 [2024-11-15 10:35:41.984630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:21.029 [2024-11-15 10:35:41.984851] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:21.029 [2024-11-15 10:35:41.984885] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:21.029 [2024-11-15 10:35:41.985068] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:21.029 10:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.029 10:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:21.029 10:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:21.029 10:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:21.029 10:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:21.029 10:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:21.029 10:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:21.029 10:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.029 10:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.029 10:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.029 10:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.029 10:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.029 10:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.029 10:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:21.029 10:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.029 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.029 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.029 "name": "raid_bdev1", 00:07:21.029 "uuid": 
"f76cb1f5-a776-47cf-9026-ced925ae6de3", 00:07:21.029 "strip_size_kb": 64, 00:07:21.029 "state": "online", 00:07:21.029 "raid_level": "raid0", 00:07:21.029 "superblock": true, 00:07:21.029 "num_base_bdevs": 2, 00:07:21.029 "num_base_bdevs_discovered": 2, 00:07:21.029 "num_base_bdevs_operational": 2, 00:07:21.029 "base_bdevs_list": [ 00:07:21.029 { 00:07:21.029 "name": "pt1", 00:07:21.029 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:21.029 "is_configured": true, 00:07:21.029 "data_offset": 2048, 00:07:21.030 "data_size": 63488 00:07:21.030 }, 00:07:21.030 { 00:07:21.030 "name": "pt2", 00:07:21.030 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:21.030 "is_configured": true, 00:07:21.030 "data_offset": 2048, 00:07:21.030 "data_size": 63488 00:07:21.030 } 00:07:21.030 ] 00:07:21.030 }' 00:07:21.030 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:21.030 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.596 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:21.596 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:21.596 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:21.596 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:21.596 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:21.596 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:21.596 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:21.596 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.596 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.596 
10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:21.596 [2024-11-15 10:35:42.474028] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:21.596 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.596 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:21.596 "name": "raid_bdev1", 00:07:21.596 "aliases": [ 00:07:21.596 "f76cb1f5-a776-47cf-9026-ced925ae6de3" 00:07:21.596 ], 00:07:21.596 "product_name": "Raid Volume", 00:07:21.596 "block_size": 512, 00:07:21.596 "num_blocks": 126976, 00:07:21.596 "uuid": "f76cb1f5-a776-47cf-9026-ced925ae6de3", 00:07:21.596 "assigned_rate_limits": { 00:07:21.596 "rw_ios_per_sec": 0, 00:07:21.597 "rw_mbytes_per_sec": 0, 00:07:21.597 "r_mbytes_per_sec": 0, 00:07:21.597 "w_mbytes_per_sec": 0 00:07:21.597 }, 00:07:21.597 "claimed": false, 00:07:21.597 "zoned": false, 00:07:21.597 "supported_io_types": { 00:07:21.597 "read": true, 00:07:21.597 "write": true, 00:07:21.597 "unmap": true, 00:07:21.597 "flush": true, 00:07:21.597 "reset": true, 00:07:21.597 "nvme_admin": false, 00:07:21.597 "nvme_io": false, 00:07:21.597 "nvme_io_md": false, 00:07:21.597 "write_zeroes": true, 00:07:21.597 "zcopy": false, 00:07:21.597 "get_zone_info": false, 00:07:21.597 "zone_management": false, 00:07:21.597 "zone_append": false, 00:07:21.597 "compare": false, 00:07:21.597 "compare_and_write": false, 00:07:21.597 "abort": false, 00:07:21.597 "seek_hole": false, 00:07:21.597 "seek_data": false, 00:07:21.597 "copy": false, 00:07:21.597 "nvme_iov_md": false 00:07:21.597 }, 00:07:21.597 "memory_domains": [ 00:07:21.597 { 00:07:21.597 "dma_device_id": "system", 00:07:21.597 "dma_device_type": 1 00:07:21.597 }, 00:07:21.597 { 00:07:21.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.597 "dma_device_type": 2 00:07:21.597 }, 00:07:21.597 { 00:07:21.597 "dma_device_id": "system", 00:07:21.597 
"dma_device_type": 1 00:07:21.597 }, 00:07:21.597 { 00:07:21.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.597 "dma_device_type": 2 00:07:21.597 } 00:07:21.597 ], 00:07:21.597 "driver_specific": { 00:07:21.597 "raid": { 00:07:21.597 "uuid": "f76cb1f5-a776-47cf-9026-ced925ae6de3", 00:07:21.597 "strip_size_kb": 64, 00:07:21.597 "state": "online", 00:07:21.597 "raid_level": "raid0", 00:07:21.597 "superblock": true, 00:07:21.597 "num_base_bdevs": 2, 00:07:21.597 "num_base_bdevs_discovered": 2, 00:07:21.597 "num_base_bdevs_operational": 2, 00:07:21.597 "base_bdevs_list": [ 00:07:21.597 { 00:07:21.597 "name": "pt1", 00:07:21.597 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:21.597 "is_configured": true, 00:07:21.597 "data_offset": 2048, 00:07:21.597 "data_size": 63488 00:07:21.597 }, 00:07:21.597 { 00:07:21.597 "name": "pt2", 00:07:21.597 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:21.597 "is_configured": true, 00:07:21.597 "data_offset": 2048, 00:07:21.597 "data_size": 63488 00:07:21.597 } 00:07:21.597 ] 00:07:21.597 } 00:07:21.597 } 00:07:21.597 }' 00:07:21.597 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:21.597 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:21.597 pt2' 00:07:21.597 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:21.597 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:21.597 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:21.597 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:21.597 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.597 10:35:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:21.597 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.597 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.597 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:21.597 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:21.597 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:21.597 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:21.597 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:21.597 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.597 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.597 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.597 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:21.597 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:21.597 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:21.597 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:21.597 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.597 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.597 [2024-11-15 10:35:42.698028] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:07:21.597 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.597 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f76cb1f5-a776-47cf-9026-ced925ae6de3 00:07:21.597 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f76cb1f5-a776-47cf-9026-ced925ae6de3 ']' 00:07:21.597 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:21.597 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.597 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.597 [2024-11-15 10:35:42.753708] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:21.597 [2024-11-15 10:35:42.753742] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:21.597 [2024-11-15 10:35:42.753856] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:21.597 [2024-11-15 10:35:42.753922] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:21.597 [2024-11-15 10:35:42.753942] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:21.859 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.859 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.859 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.859 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:21.859 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.859 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:07:21.859 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:21.859 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:21.859 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:21.859 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:21.859 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.859 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.859 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.859 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:21.859 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:21.859 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.859 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.859 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.859 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:21.859 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.859 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.859 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:21.859 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.859 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:21.859 10:35:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:21.859 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:21.859 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:21.859 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:21.859 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:21.860 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:21.860 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:21.860 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:21.860 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.860 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.860 [2024-11-15 10:35:42.877804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:21.860 [2024-11-15 10:35:42.880264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:21.860 [2024-11-15 10:35:42.880365] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:21.860 [2024-11-15 10:35:42.880449] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:21.860 [2024-11-15 10:35:42.880477] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:21.860 [2024-11-15 10:35:42.880519] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:21.860 request: 00:07:21.860 { 00:07:21.860 "name": "raid_bdev1", 00:07:21.860 "raid_level": "raid0", 00:07:21.860 "base_bdevs": [ 00:07:21.860 "malloc1", 00:07:21.860 "malloc2" 00:07:21.860 ], 00:07:21.860 "strip_size_kb": 64, 00:07:21.860 "superblock": false, 00:07:21.860 "method": "bdev_raid_create", 00:07:21.860 "req_id": 1 00:07:21.860 } 00:07:21.860 Got JSON-RPC error response 00:07:21.860 response: 00:07:21.860 { 00:07:21.860 "code": -17, 00:07:21.860 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:21.860 } 00:07:21.860 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:21.860 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:21.860 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:21.860 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:21.860 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:21.860 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.860 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.860 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.860 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:21.860 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.860 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:21.860 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:21.860 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:07:21.860 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.860 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.860 [2024-11-15 10:35:42.941762] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:21.861 [2024-11-15 10:35:42.941831] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:21.861 [2024-11-15 10:35:42.941859] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:21.861 [2024-11-15 10:35:42.941877] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:21.861 [2024-11-15 10:35:42.944708] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:21.861 [2024-11-15 10:35:42.944760] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:21.861 [2024-11-15 10:35:42.944858] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:21.861 [2024-11-15 10:35:42.944945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:21.861 pt1 00:07:21.861 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.861 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:21.861 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:21.861 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:21.861 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:21.861 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:21.861 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:21.861 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.861 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.861 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.861 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.861 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.861 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.861 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:21.861 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.861 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.861 10:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.861 "name": "raid_bdev1", 00:07:21.861 "uuid": "f76cb1f5-a776-47cf-9026-ced925ae6de3", 00:07:21.861 "strip_size_kb": 64, 00:07:21.861 "state": "configuring", 00:07:21.861 "raid_level": "raid0", 00:07:21.861 "superblock": true, 00:07:21.861 "num_base_bdevs": 2, 00:07:21.861 "num_base_bdevs_discovered": 1, 00:07:21.861 "num_base_bdevs_operational": 2, 00:07:21.861 "base_bdevs_list": [ 00:07:21.861 { 00:07:21.862 "name": "pt1", 00:07:21.862 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:21.862 "is_configured": true, 00:07:21.862 "data_offset": 2048, 00:07:21.862 "data_size": 63488 00:07:21.862 }, 00:07:21.862 { 00:07:21.862 "name": null, 00:07:21.862 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:21.862 "is_configured": false, 00:07:21.862 "data_offset": 2048, 00:07:21.862 "data_size": 63488 00:07:21.862 } 00:07:21.862 ] 00:07:21.862 }' 00:07:21.862 10:35:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:21.862 10:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.429 10:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:22.429 10:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:22.429 10:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:22.429 10:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:22.429 10:35:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.429 10:35:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.429 [2024-11-15 10:35:43.405939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:22.429 [2024-11-15 10:35:43.406029] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:22.429 [2024-11-15 10:35:43.406062] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:22.429 [2024-11-15 10:35:43.406080] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:22.429 [2024-11-15 10:35:43.406701] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:22.429 [2024-11-15 10:35:43.406766] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:22.429 [2024-11-15 10:35:43.406873] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:22.429 [2024-11-15 10:35:43.406918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:22.429 [2024-11-15 10:35:43.407065] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:22.429 [2024-11-15 10:35:43.407097] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:22.429 [2024-11-15 10:35:43.407395] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:22.429 [2024-11-15 10:35:43.407617] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:22.429 [2024-11-15 10:35:43.407640] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:22.429 [2024-11-15 10:35:43.407813] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:22.429 pt2 00:07:22.429 10:35:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.429 10:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:22.429 10:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:22.429 10:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:22.429 10:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:22.429 10:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:22.429 10:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:22.429 10:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:22.429 10:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:22.429 10:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.429 10:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.429 10:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.429 10:35:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.429 10:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.429 10:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:22.429 10:35:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.429 10:35:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.429 10:35:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.429 10:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.429 "name": "raid_bdev1", 00:07:22.429 "uuid": "f76cb1f5-a776-47cf-9026-ced925ae6de3", 00:07:22.429 "strip_size_kb": 64, 00:07:22.429 "state": "online", 00:07:22.430 "raid_level": "raid0", 00:07:22.430 "superblock": true, 00:07:22.430 "num_base_bdevs": 2, 00:07:22.430 "num_base_bdevs_discovered": 2, 00:07:22.430 "num_base_bdevs_operational": 2, 00:07:22.430 "base_bdevs_list": [ 00:07:22.430 { 00:07:22.430 "name": "pt1", 00:07:22.430 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:22.430 "is_configured": true, 00:07:22.430 "data_offset": 2048, 00:07:22.430 "data_size": 63488 00:07:22.430 }, 00:07:22.430 { 00:07:22.430 "name": "pt2", 00:07:22.430 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:22.430 "is_configured": true, 00:07:22.430 "data_offset": 2048, 00:07:22.430 "data_size": 63488 00:07:22.430 } 00:07:22.430 ] 00:07:22.430 }' 00:07:22.430 10:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.430 10:35:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.994 10:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:22.994 10:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:22.994 
10:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:22.994 10:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:22.994 10:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:22.994 10:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:22.994 10:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:22.994 10:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:22.994 10:35:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.994 10:35:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.994 [2024-11-15 10:35:43.970349] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:22.994 10:35:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.994 10:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:22.994 "name": "raid_bdev1", 00:07:22.994 "aliases": [ 00:07:22.994 "f76cb1f5-a776-47cf-9026-ced925ae6de3" 00:07:22.994 ], 00:07:22.994 "product_name": "Raid Volume", 00:07:22.994 "block_size": 512, 00:07:22.994 "num_blocks": 126976, 00:07:22.994 "uuid": "f76cb1f5-a776-47cf-9026-ced925ae6de3", 00:07:22.994 "assigned_rate_limits": { 00:07:22.994 "rw_ios_per_sec": 0, 00:07:22.994 "rw_mbytes_per_sec": 0, 00:07:22.994 "r_mbytes_per_sec": 0, 00:07:22.994 "w_mbytes_per_sec": 0 00:07:22.994 }, 00:07:22.995 "claimed": false, 00:07:22.995 "zoned": false, 00:07:22.995 "supported_io_types": { 00:07:22.995 "read": true, 00:07:22.995 "write": true, 00:07:22.995 "unmap": true, 00:07:22.995 "flush": true, 00:07:22.995 "reset": true, 00:07:22.995 "nvme_admin": false, 00:07:22.995 "nvme_io": false, 00:07:22.995 "nvme_io_md": false, 00:07:22.995 
"write_zeroes": true, 00:07:22.995 "zcopy": false, 00:07:22.995 "get_zone_info": false, 00:07:22.995 "zone_management": false, 00:07:22.995 "zone_append": false, 00:07:22.995 "compare": false, 00:07:22.995 "compare_and_write": false, 00:07:22.995 "abort": false, 00:07:22.995 "seek_hole": false, 00:07:22.995 "seek_data": false, 00:07:22.995 "copy": false, 00:07:22.995 "nvme_iov_md": false 00:07:22.995 }, 00:07:22.995 "memory_domains": [ 00:07:22.995 { 00:07:22.995 "dma_device_id": "system", 00:07:22.995 "dma_device_type": 1 00:07:22.995 }, 00:07:22.995 { 00:07:22.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.995 "dma_device_type": 2 00:07:22.995 }, 00:07:22.995 { 00:07:22.995 "dma_device_id": "system", 00:07:22.995 "dma_device_type": 1 00:07:22.995 }, 00:07:22.995 { 00:07:22.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.995 "dma_device_type": 2 00:07:22.995 } 00:07:22.995 ], 00:07:22.995 "driver_specific": { 00:07:22.995 "raid": { 00:07:22.995 "uuid": "f76cb1f5-a776-47cf-9026-ced925ae6de3", 00:07:22.995 "strip_size_kb": 64, 00:07:22.995 "state": "online", 00:07:22.995 "raid_level": "raid0", 00:07:22.995 "superblock": true, 00:07:22.995 "num_base_bdevs": 2, 00:07:22.995 "num_base_bdevs_discovered": 2, 00:07:22.995 "num_base_bdevs_operational": 2, 00:07:22.995 "base_bdevs_list": [ 00:07:22.995 { 00:07:22.995 "name": "pt1", 00:07:22.995 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:22.995 "is_configured": true, 00:07:22.995 "data_offset": 2048, 00:07:22.995 "data_size": 63488 00:07:22.995 }, 00:07:22.995 { 00:07:22.995 "name": "pt2", 00:07:22.995 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:22.995 "is_configured": true, 00:07:22.995 "data_offset": 2048, 00:07:22.995 "data_size": 63488 00:07:22.995 } 00:07:22.995 ] 00:07:22.995 } 00:07:22.995 } 00:07:22.995 }' 00:07:22.995 10:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:07:22.995 10:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:22.995 pt2' 00:07:22.995 10:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:22.995 10:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:22.995 10:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:22.995 10:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:22.995 10:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:22.995 10:35:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.995 10:35:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.995 10:35:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.253 10:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:23.253 10:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:23.253 10:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:23.253 10:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:23.253 10:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:23.253 10:35:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.253 10:35:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.253 10:35:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.253 10:35:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:23.253 10:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:23.254 10:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:23.254 10:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:23.254 10:35:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.254 10:35:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.254 [2024-11-15 10:35:44.210407] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:23.254 10:35:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.254 10:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f76cb1f5-a776-47cf-9026-ced925ae6de3 '!=' f76cb1f5-a776-47cf-9026-ced925ae6de3 ']' 00:07:23.254 10:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:23.254 10:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:23.254 10:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:23.254 10:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61151 00:07:23.254 10:35:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61151 ']' 00:07:23.254 10:35:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61151 00:07:23.254 10:35:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:23.254 10:35:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:23.254 10:35:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61151 00:07:23.254 10:35:44 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:23.254 10:35:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:23.254 10:35:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61151' 00:07:23.254 killing process with pid 61151 00:07:23.254 10:35:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61151 00:07:23.254 [2024-11-15 10:35:44.278894] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:23.254 10:35:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61151 00:07:23.254 [2024-11-15 10:35:44.279013] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:23.254 [2024-11-15 10:35:44.279081] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:23.254 [2024-11-15 10:35:44.279102] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:23.567 [2024-11-15 10:35:44.465991] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:24.514 10:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:24.514 00:07:24.514 real 0m4.733s 00:07:24.514 user 0m6.953s 00:07:24.514 sys 0m0.672s 00:07:24.514 10:35:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.514 10:35:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.514 ************************************ 00:07:24.514 END TEST raid_superblock_test 00:07:24.514 ************************************ 00:07:24.514 10:35:45 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:24.514 10:35:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:24.514 10:35:45 bdev_raid -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:07:24.514 10:35:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:24.514 ************************************ 00:07:24.514 START TEST raid_read_error_test 00:07:24.514 ************************************ 00:07:24.514 10:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:07:24.514 10:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:24.514 10:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:24.514 10:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:24.514 10:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:24.514 10:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:24.514 10:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:24.514 10:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:24.514 10:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:24.514 10:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:24.514 10:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:24.514 10:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:24.514 10:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:24.514 10:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:24.514 10:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:24.514 10:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:24.514 10:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- 
# local create_arg 00:07:24.514 10:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:24.514 10:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:24.514 10:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:24.514 10:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:24.514 10:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:24.514 10:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:24.514 10:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.0TpiUxkNKZ 00:07:24.514 10:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61363 00:07:24.514 10:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61363 00:07:24.514 10:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61363 ']' 00:07:24.514 10:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:24.514 10:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.514 10:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.514 10:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:24.514 10:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.514 10:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.773 [2024-11-15 10:35:45.679784] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:07:24.773 [2024-11-15 10:35:45.679946] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61363 ] 00:07:24.773 [2024-11-15 10:35:45.857117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.031 [2024-11-15 10:35:45.988314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.290 [2024-11-15 10:35:46.192766] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:25.290 [2024-11-15 10:35:46.192852] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:25.614 10:35:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.614 10:35:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:25.614 10:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:25.614 10:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:25.614 10:35:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.614 10:35:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.614 BaseBdev1_malloc 00:07:25.614 10:35:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.614 10:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:07:25.614 10:35:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.614 10:35:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.614 true 00:07:25.614 10:35:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.614 10:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:25.614 10:35:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.614 10:35:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.614 [2024-11-15 10:35:46.673951] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:25.614 [2024-11-15 10:35:46.674024] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:25.614 [2024-11-15 10:35:46.674054] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:25.614 [2024-11-15 10:35:46.674073] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:25.614 [2024-11-15 10:35:46.676922] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:25.614 [2024-11-15 10:35:46.676974] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:25.614 BaseBdev1 00:07:25.614 10:35:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.614 10:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:25.614 10:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:25.614 10:35:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.614 10:35:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:25.614 BaseBdev2_malloc 00:07:25.614 10:35:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.614 10:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:25.614 10:35:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.614 10:35:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.614 true 00:07:25.614 10:35:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.614 10:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:25.614 10:35:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.614 10:35:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.614 [2024-11-15 10:35:46.730206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:25.614 [2024-11-15 10:35:46.730278] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:25.614 [2024-11-15 10:35:46.730307] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:25.614 [2024-11-15 10:35:46.730324] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:25.614 [2024-11-15 10:35:46.733141] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:25.614 [2024-11-15 10:35:46.733194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:25.614 BaseBdev2 00:07:25.614 10:35:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.614 10:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:25.614 10:35:46 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.614 10:35:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.615 [2024-11-15 10:35:46.738284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:25.615 [2024-11-15 10:35:46.740744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:25.615 [2024-11-15 10:35:46.741005] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:25.615 [2024-11-15 10:35:46.741042] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:25.615 [2024-11-15 10:35:46.741349] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:25.615 [2024-11-15 10:35:46.741629] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:25.615 [2024-11-15 10:35:46.741661] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:25.615 [2024-11-15 10:35:46.741865] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:25.615 10:35:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.615 10:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:25.615 10:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:25.615 10:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:25.615 10:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:25.615 10:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:25.615 10:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:25.615 10:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.615 10:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.615 10:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.615 10:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.615 10:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.615 10:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:25.615 10:35:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.615 10:35:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.615 10:35:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.873 10:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.873 "name": "raid_bdev1", 00:07:25.873 "uuid": "e4638774-b443-40c3-a719-c487bf86ab30", 00:07:25.873 "strip_size_kb": 64, 00:07:25.873 "state": "online", 00:07:25.873 "raid_level": "raid0", 00:07:25.873 "superblock": true, 00:07:25.873 "num_base_bdevs": 2, 00:07:25.873 "num_base_bdevs_discovered": 2, 00:07:25.873 "num_base_bdevs_operational": 2, 00:07:25.873 "base_bdevs_list": [ 00:07:25.873 { 00:07:25.873 "name": "BaseBdev1", 00:07:25.873 "uuid": "8c8cca8e-2f99-579e-aa7c-205e9ef95787", 00:07:25.873 "is_configured": true, 00:07:25.873 "data_offset": 2048, 00:07:25.873 "data_size": 63488 00:07:25.873 }, 00:07:25.873 { 00:07:25.873 "name": "BaseBdev2", 00:07:25.873 "uuid": "c77c9ff9-5141-546b-87c0-3925c1c79b4d", 00:07:25.873 "is_configured": true, 00:07:25.873 "data_offset": 2048, 00:07:25.873 "data_size": 63488 00:07:25.873 } 00:07:25.873 ] 00:07:25.873 }' 00:07:25.873 10:35:46 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.873 10:35:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.131 10:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:26.131 10:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:26.389 [2024-11-15 10:35:47.339829] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:27.429 10:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:27.429 10:35:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.429 10:35:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.429 10:35:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.429 10:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:27.429 10:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:27.429 10:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:27.429 10:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:27.429 10:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:27.429 10:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:27.429 10:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:27.429 10:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:27.429 10:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:27.429 10:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.429 10:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.429 10:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.429 10:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.429 10:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.429 10:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:27.429 10:35:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.429 10:35:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.429 10:35:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.429 10:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.429 "name": "raid_bdev1", 00:07:27.429 "uuid": "e4638774-b443-40c3-a719-c487bf86ab30", 00:07:27.429 "strip_size_kb": 64, 00:07:27.429 "state": "online", 00:07:27.429 "raid_level": "raid0", 00:07:27.429 "superblock": true, 00:07:27.429 "num_base_bdevs": 2, 00:07:27.429 "num_base_bdevs_discovered": 2, 00:07:27.429 "num_base_bdevs_operational": 2, 00:07:27.429 "base_bdevs_list": [ 00:07:27.429 { 00:07:27.429 "name": "BaseBdev1", 00:07:27.429 "uuid": "8c8cca8e-2f99-579e-aa7c-205e9ef95787", 00:07:27.429 "is_configured": true, 00:07:27.429 "data_offset": 2048, 00:07:27.429 "data_size": 63488 00:07:27.429 }, 00:07:27.429 { 00:07:27.429 "name": "BaseBdev2", 00:07:27.429 "uuid": "c77c9ff9-5141-546b-87c0-3925c1c79b4d", 00:07:27.429 "is_configured": true, 00:07:27.429 "data_offset": 2048, 00:07:27.429 "data_size": 63488 00:07:27.429 } 00:07:27.429 ] 00:07:27.429 }' 00:07:27.429 10:35:48 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.429 10:35:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.688 10:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:27.688 10:35:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.688 10:35:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.688 [2024-11-15 10:35:48.682932] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:27.688 [2024-11-15 10:35:48.682981] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:27.688 [2024-11-15 10:35:48.686322] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:27.688 [2024-11-15 10:35:48.686399] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:27.688 [2024-11-15 10:35:48.686457] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:27.688 [2024-11-15 10:35:48.686483] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:27.688 { 00:07:27.688 "results": [ 00:07:27.688 { 00:07:27.688 "job": "raid_bdev1", 00:07:27.688 "core_mask": "0x1", 00:07:27.688 "workload": "randrw", 00:07:27.688 "percentage": 50, 00:07:27.688 "status": "finished", 00:07:27.688 "queue_depth": 1, 00:07:27.688 "io_size": 131072, 00:07:27.688 "runtime": 1.340669, 00:07:27.688 "iops": 10479.096630115264, 00:07:27.688 "mibps": 1309.887078764408, 00:07:27.688 "io_failed": 1, 00:07:27.688 "io_timeout": 0, 00:07:27.688 "avg_latency_us": 133.48297172436105, 00:07:27.688 "min_latency_us": 43.985454545454544, 00:07:27.688 "max_latency_us": 1876.7127272727273 00:07:27.688 } 00:07:27.688 ], 00:07:27.688 "core_count": 1 00:07:27.688 } 00:07:27.688 10:35:48 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.688 10:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61363 00:07:27.688 10:35:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61363 ']' 00:07:27.688 10:35:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61363 00:07:27.688 10:35:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:27.688 10:35:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:27.688 10:35:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61363 00:07:27.688 killing process with pid 61363 00:07:27.688 10:35:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:27.688 10:35:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:27.688 10:35:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61363' 00:07:27.688 10:35:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61363 00:07:27.688 [2024-11-15 10:35:48.724678] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:27.688 10:35:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61363 00:07:27.947 [2024-11-15 10:35:48.855079] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:28.883 10:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.0TpiUxkNKZ 00:07:28.883 10:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:28.883 10:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:28.883 10:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:07:28.883 10:35:49 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:28.883 10:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:28.883 10:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:28.883 10:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:07:28.883 00:07:28.883 real 0m4.409s 00:07:28.883 user 0m5.443s 00:07:28.883 sys 0m0.556s 00:07:28.883 10:35:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.883 10:35:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.883 ************************************ 00:07:28.883 END TEST raid_read_error_test 00:07:28.883 ************************************ 00:07:28.883 10:35:49 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:28.883 10:35:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:28.883 10:35:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.883 10:35:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:28.883 ************************************ 00:07:28.883 START TEST raid_write_error_test 00:07:28.883 ************************************ 00:07:28.883 10:35:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:07:28.883 10:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:28.883 10:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:28.883 10:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:28.883 10:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:28.883 10:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:28.883 10:35:49 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:28.883 10:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:28.883 10:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:28.883 10:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:28.883 10:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:28.883 10:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:28.883 10:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:28.883 10:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:28.883 10:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:28.883 10:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:28.883 10:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:28.883 10:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:28.883 10:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:28.883 10:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:28.883 10:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:28.883 10:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:28.883 10:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:28.883 10:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.eCDKtxMPmx 00:07:28.883 10:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61503 00:07:28.883 10:35:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61503 00:07:28.883 10:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61503 ']' 00:07:28.883 10:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:28.883 10:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.883 10:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.883 10:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.883 10:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.883 10:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.142 [2024-11-15 10:35:50.104411] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:07:29.142 [2024-11-15 10:35:50.104620] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61503 ] 00:07:29.142 [2024-11-15 10:35:50.285882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.400 [2024-11-15 10:35:50.442633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.659 [2024-11-15 10:35:50.683855] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:29.659 [2024-11-15 10:35:50.683915] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:30.251 10:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:30.251 10:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:30.251 10:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:30.251 10:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:30.251 10:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.251 10:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.251 BaseBdev1_malloc 00:07:30.251 10:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.251 10:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:30.251 10:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.251 10:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.251 true 00:07:30.251 10:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:30.251 10:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:30.251 10:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.252 10:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.252 [2024-11-15 10:35:51.146880] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:30.252 [2024-11-15 10:35:51.146948] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:30.252 [2024-11-15 10:35:51.146977] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:30.252 [2024-11-15 10:35:51.147001] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:30.252 [2024-11-15 10:35:51.149837] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:30.252 [2024-11-15 10:35:51.149889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:30.252 BaseBdev1 00:07:30.252 10:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.252 10:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:30.252 10:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:30.252 10:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.252 10:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.252 BaseBdev2_malloc 00:07:30.252 10:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.252 10:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:30.252 10:35:51 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:30.252 10:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:30.252 true
00:07:30.252 10:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:30.252 10:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:07:30.252 10:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:30.252 10:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:30.252 [2024-11-15 10:35:51.203462] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:07:30.252 [2024-11-15 10:35:51.203544] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:30.252 [2024-11-15 10:35:51.203570] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:07:30.252 [2024-11-15 10:35:51.203587] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:30.252 [2024-11-15 10:35:51.206384] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:30.252 [2024-11-15 10:35:51.206437] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:07:30.252 BaseBdev2
00:07:30.252 10:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:30.252 10:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s
00:07:30.252 10:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:30.252 10:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:30.252 [2024-11-15 10:35:51.211567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:30.252 [2024-11-15 10:35:51.214030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:07:30.252 [2024-11-15 10:35:51.214305] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:07:30.252 [2024-11-15 10:35:51.214331] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:07:30.252 [2024-11-15 10:35:51.214657] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:07:30.252 [2024-11-15 10:35:51.214892] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:07:30.252 [2024-11-15 10:35:51.214918] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:07:30.252 [2024-11-15 10:35:51.215109] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:30.252 10:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:30.252 10:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:07:30.252 10:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:30.252 10:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:30.252 10:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:30.252 10:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:30.252 10:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:30.252 10:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:30.252 10:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:30.252 10:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:30.252 10:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:30.252 10:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:30.252 10:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:30.252 10:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:30.252 10:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:30.252 10:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:30.252 10:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:30.252 "name": "raid_bdev1",
00:07:30.252 "uuid": "d8ab3863-0aba-47cf-80e2-51dcfec085bd",
00:07:30.252 "strip_size_kb": 64,
00:07:30.252 "state": "online",
00:07:30.252 "raid_level": "raid0",
00:07:30.252 "superblock": true,
00:07:30.252 "num_base_bdevs": 2,
00:07:30.252 "num_base_bdevs_discovered": 2,
00:07:30.252 "num_base_bdevs_operational": 2,
00:07:30.252 "base_bdevs_list": [
00:07:30.252 {
00:07:30.252 "name": "BaseBdev1",
00:07:30.252 "uuid": "04749843-a0cd-50b5-9cfb-3be8f61efdb1",
00:07:30.252 "is_configured": true,
00:07:30.252 "data_offset": 2048,
00:07:30.252 "data_size": 63488
00:07:30.252 },
00:07:30.252 {
00:07:30.252 "name": "BaseBdev2",
00:07:30.252 "uuid": "bae5f5ec-2303-5f40-a748-65d875e733d0",
00:07:30.252 "is_configured": true,
00:07:30.252 "data_offset": 2048,
00:07:30.252 "data_size": 63488
00:07:30.252 }
00:07:30.252 ]
00:07:30.252 }'
00:07:30.252 10:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:30.252 10:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:30.833 10:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:07:30.833 10:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:07:30.834 [2024-11-15 10:35:51.869176] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
00:07:31.769 10:35:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:07:31.769 10:35:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:31.769 10:35:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:31.769 10:35:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:31.769 10:35:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:07:31.769 10:35:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]]
00:07:31.769 10:35:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2
00:07:31.769 10:35:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:07:31.769 10:35:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:31.769 10:35:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:31.769 10:35:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:31.769 10:35:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:31.769 10:35:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:31.769 10:35:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:31.769 10:35:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:31.769 10:35:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:31.769 10:35:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:31.769 10:35:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:31.769 10:35:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:31.769 10:35:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:31.769 10:35:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:31.769 10:35:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:31.769 10:35:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:31.769 "name": "raid_bdev1",
00:07:31.769 "uuid": "d8ab3863-0aba-47cf-80e2-51dcfec085bd",
00:07:31.769 "strip_size_kb": 64,
00:07:31.769 "state": "online",
00:07:31.769 "raid_level": "raid0",
00:07:31.769 "superblock": true,
00:07:31.769 "num_base_bdevs": 2,
00:07:31.769 "num_base_bdevs_discovered": 2,
00:07:31.769 "num_base_bdevs_operational": 2,
00:07:31.769 "base_bdevs_list": [
00:07:31.769 {
00:07:31.769 "name": "BaseBdev1",
00:07:31.769 "uuid": "04749843-a0cd-50b5-9cfb-3be8f61efdb1",
00:07:31.769 "is_configured": true,
00:07:31.769 "data_offset": 2048,
00:07:31.769 "data_size": 63488
00:07:31.769 },
00:07:31.769 {
00:07:31.769 "name": "BaseBdev2",
00:07:31.769 "uuid": "bae5f5ec-2303-5f40-a748-65d875e733d0",
00:07:31.769 "is_configured": true,
00:07:31.769 "data_offset": 2048,
00:07:31.769 "data_size": 63488
00:07:31.769 }
00:07:31.769 ]
00:07:31.769 }'
00:07:31.769 10:35:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:31.769 10:35:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:32.337 10:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:07:32.337 10:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:32.337 10:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:32.337 [2024-11-15 10:35:53.286995] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:07:32.337 [2024-11-15 10:35:53.287183] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:32.338 [2024-11-15 10:35:53.290649] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:32.338 [2024-11-15 10:35:53.290830] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:32.338 [2024-11-15 10:35:53.290889] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:32.338 [2024-11-15 10:35:53.290910] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:07:32.338 {
00:07:32.338 "results": [
00:07:32.338 {
00:07:32.338 "job": "raid_bdev1",
00:07:32.338 "core_mask": "0x1",
00:07:32.338 "workload": "randrw",
00:07:32.338 "percentage": 50,
00:07:32.338 "status": "finished",
00:07:32.338 "queue_depth": 1,
00:07:32.338 "io_size": 131072,
00:07:32.338 "runtime": 1.415544,
00:07:32.338 "iops": 11074.894174960298,
00:07:32.338 "mibps": 1384.3617718700373,
00:07:32.338 "io_failed": 1,
00:07:32.338 "io_timeout": 0,
00:07:32.338 "avg_latency_us": 126.02446044834106,
00:07:32.338 "min_latency_us": 42.35636363636364,
00:07:32.338 "max_latency_us": 1899.0545454545454
00:07:32.338 }
00:07:32.338 ],
00:07:32.338 "core_count": 1
00:07:32.338 }
00:07:32.338 10:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:32.338 10:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61503
00:07:32.338 10:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 61503 ']'
00:07:32.338 10:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61503
00:07:32.338 10:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname
00:07:32.338 10:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:32.338 10:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61503
killing process with pid 61503
00:07:32.338 10:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:32.338 10:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:32.338 10:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61503'
00:07:32.338 10:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61503
[2024-11-15 10:35:53.325599] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:32.338 10:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61503
[2024-11-15 10:35:53.444882] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:33.768 10:35:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.eCDKtxMPmx
00:07:33.768 10:35:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:07:33.768 10:35:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
************************************
00:07:33.768 END TEST raid_write_error_test
************************************
00:07:33.768 10:35:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71
00:07:33.768 10:35:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0
00:07:33.768 10:35:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:07:33.768 10:35:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:07:33.768 10:35:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]]
00:07:33.768
00:07:33.768 real 0m4.531s
00:07:33.768 user 0m5.716s
00:07:33.768 sys 0m0.551s
00:07:33.768 10:35:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:33.768 10:35:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:33.768 10:35:54 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:07:33.768 10:35:54 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false
00:07:33.768 10:35:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:07:33.768 10:35:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:33.768 10:35:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:33.768 ************************************
00:07:33.768 START TEST raid_state_function_test
************************************
00:07:33.768 10:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false
00:07:33.768 10:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat
00:07:33.768 10:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:07:33.768 10:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:07:33.768 10:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:07:33.768 10:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:07:33.768 10:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:33.768 10:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:07:33.768 10:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:07:33.768 10:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:33.768 10:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:07:33.768 10:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:07:33.768 10:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:33.768 10:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:07:33.768 10:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:07:33.768 10:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:07:33.768 10:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:07:33.768 10:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:07:33.768 10:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:07:33.768 10:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']'
00:07:33.768 10:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:07:33.768 10:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:07:33.768 10:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:07:33.768 10:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
Process raid pid: 61652
00:07:33.768 10:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61652
00:07:33.768 10:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61652'
00:07:33.768 10:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61652
00:07:33.768 10:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:33.768 10:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61652 ']'
00:07:33.768 10:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:33.768 10:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:33.768 10:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:33.768 10:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:33.768 10:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:33.768 [2024-11-15 10:35:54.683374] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization...
00:07:33.768 [2024-11-15 10:35:54.683570] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:33.768 [2024-11-15 10:35:54.873916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:34.027 [2024-11-15 10:35:55.008543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:34.286 [2024-11-15 10:35:55.219467] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:34.286 [2024-11-15 10:35:55.219533] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:34.853 10:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:34.853 10:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0
00:07:34.853 10:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:34.853 10:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:34.853 10:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:34.853 [2024-11-15 10:35:55.710161] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:34.853 [2024-11-15 10:35:55.710256] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:34.853 [2024-11-15 10:35:55.710282] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:34.853 [2024-11-15 10:35:55.710305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:34.853 10:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:34.853 10:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:07:34.853 10:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:34.853 10:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:34.853 10:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:34.853 10:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:34.853 10:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:34.853 10:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:34.853 10:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:34.853 10:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:34.853 10:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:34.853 10:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:34.853 10:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:34.853 10:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:34.853 10:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:34.853 10:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:34.853 10:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:34.853 "name": "Existed_Raid",
00:07:34.853 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:34.853 "strip_size_kb": 64,
00:07:34.853 "state": "configuring",
"raid_level": "concat",
00:07:34.853 "superblock": false,
00:07:34.853 "num_base_bdevs": 2,
00:07:34.853 "num_base_bdevs_discovered": 0,
00:07:34.853 "num_base_bdevs_operational": 2,
00:07:34.853 "base_bdevs_list": [
00:07:34.853 {
00:07:34.853 "name": "BaseBdev1",
00:07:34.853 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:34.853 "is_configured": false,
00:07:34.853 "data_offset": 0,
00:07:34.853 "data_size": 0
00:07:34.853 },
00:07:34.853 {
00:07:34.853 "name": "BaseBdev2",
00:07:34.853 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:34.853 "is_configured": false,
00:07:34.853 "data_offset": 0,
00:07:34.853 "data_size": 0
00:07:34.853 }
00:07:34.853 ]
00:07:34.853 }'
00:07:34.853 10:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:34.853 10:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:35.111 10:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:07:35.111 10:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:35.111 10:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:35.111 [2024-11-15 10:35:56.234228] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:07:35.111 [2024-11-15 10:35:56.234271] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:07:35.111 10:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:35.111 10:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:35.112 10:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:35.112 10:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:35.112 [2024-11-15 10:35:56.246208] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:35.112 [2024-11-15 10:35:56.246401] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:35.112 [2024-11-15 10:35:56.246428] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:35.112 [2024-11-15 10:35:56.246450] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:35.112 10:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:35.112 10:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:07:35.112 10:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:35.112 10:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:35.370 [2024-11-15 10:35:56.291575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:35.370 BaseBdev1
00:07:35.370 10:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:35.370 10:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:07:35.370 10:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:07:35.370 10:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:07:35.370 10:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:07:35.370 10:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:07:35.370 10:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:07:35.370 10:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:07:35.370 10:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:35.370 10:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:35.370 10:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:35.370 10:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:07:35.370 10:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:35.370 10:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:35.370 [
00:07:35.370 {
00:07:35.370 "name": "BaseBdev1",
00:07:35.370 "aliases": [
00:07:35.370 "5a3f5834-10c2-4b5a-a966-628b1824f3fa"
00:07:35.370 ],
00:07:35.370 "product_name": "Malloc disk",
00:07:35.370 "block_size": 512,
00:07:35.370 "num_blocks": 65536,
00:07:35.370 "uuid": "5a3f5834-10c2-4b5a-a966-628b1824f3fa",
00:07:35.370 "assigned_rate_limits": {
00:07:35.370 "rw_ios_per_sec": 0,
00:07:35.370 "rw_mbytes_per_sec": 0,
00:07:35.370 "r_mbytes_per_sec": 0,
00:07:35.370 "w_mbytes_per_sec": 0
00:07:35.370 },
00:07:35.370 "claimed": true,
00:07:35.370 "claim_type": "exclusive_write",
00:07:35.370 "zoned": false,
00:07:35.370 "supported_io_types": {
00:07:35.370 "read": true,
00:07:35.370 "write": true,
00:07:35.370 "unmap": true,
00:07:35.370 "flush": true,
00:07:35.370 "reset": true,
00:07:35.370 "nvme_admin": false,
00:07:35.370 "nvme_io": false,
00:07:35.370 "nvme_io_md": false,
00:07:35.370 "write_zeroes": true,
00:07:35.370 "zcopy": true,
00:07:35.370 "get_zone_info": false,
00:07:35.370 "zone_management": false,
00:07:35.370 "zone_append": false,
00:07:35.370 "compare": false,
00:07:35.370 "compare_and_write": false,
00:07:35.370 "abort": true,
00:07:35.370 "seek_hole": false,
00:07:35.370 "seek_data": false,
00:07:35.370 "copy": true,
00:07:35.370 "nvme_iov_md": false
00:07:35.370 },
00:07:35.370 "memory_domains": [
00:07:35.370 {
00:07:35.370 "dma_device_id": "system",
00:07:35.370 "dma_device_type": 1
00:07:35.370 },
00:07:35.370 {
00:07:35.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:35.370 "dma_device_type": 2
00:07:35.370 }
00:07:35.370 ],
00:07:35.370 "driver_specific": {}
00:07:35.370 }
00:07:35.370 ]
00:07:35.370 10:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:35.370 10:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:07:35.370 10:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:07:35.370 10:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:35.370 10:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:35.370 10:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:35.370 10:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:35.370 10:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:35.370 10:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:35.370 10:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:35.370 10:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:35.370 10:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:35.370 10:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:35.370 10:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
10:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:35.370 10:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:35.370 10:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:35.370 10:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:35.370 "name": "Existed_Raid",
00:07:35.370 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:35.370 "strip_size_kb": 64,
00:07:35.370 "state": "configuring",
00:07:35.370 "raid_level": "concat",
00:07:35.370 "superblock": false,
00:07:35.370 "num_base_bdevs": 2,
00:07:35.370 "num_base_bdevs_discovered": 1,
00:07:35.370 "num_base_bdevs_operational": 2,
00:07:35.370 "base_bdevs_list": [
00:07:35.370 {
00:07:35.370 "name": "BaseBdev1",
00:07:35.370 "uuid": "5a3f5834-10c2-4b5a-a966-628b1824f3fa",
00:07:35.370 "is_configured": true,
00:07:35.370 "data_offset": 0,
00:07:35.370 "data_size": 65536
00:07:35.370 },
00:07:35.370 {
00:07:35.370 "name": "BaseBdev2",
00:07:35.370 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:35.370 "is_configured": false,
00:07:35.370 "data_offset": 0,
00:07:35.370 "data_size": 0
00:07:35.370 }
00:07:35.370 ]
00:07:35.370 }'
00:07:35.370 10:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:35.370 10:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:35.938 10:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:07:35.938 10:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:35.938 10:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:35.938 [2024-11-15 10:35:56.839789] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:07:35.938 [2024-11-15 10:35:56.839855] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:07:35.938 10:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:35.938 10:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:35.938 10:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:35.938 10:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:35.938 [2024-11-15 10:35:56.851836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:35.938 [2024-11-15 10:35:56.854335] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:35.938 [2024-11-15 10:35:56.854513] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:35.938 10:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:35.938 10:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:07:35.938 10:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:07:35.938 10:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:07:35.938 10:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:35.938 10:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:35.938 10:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:35.938 10:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:35.938 10:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:35.938 10:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:35.938 10:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:35.938 10:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:35.938 10:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:35.938 10:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:35.938 10:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:35.938 10:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:35.938 10:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:35.938 10:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:35.938 10:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:35.938 "name": "Existed_Raid",
00:07:35.938 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:35.938 "strip_size_kb": 64,
00:07:35.938 "state": "configuring",
00:07:35.938 "raid_level": "concat",
00:07:35.938 "superblock": false,
00:07:35.938 "num_base_bdevs": 2,
00:07:35.938 "num_base_bdevs_discovered": 1,
00:07:35.938 "num_base_bdevs_operational": 2,
00:07:35.938 "base_bdevs_list": [
00:07:35.938 {
00:07:35.938 "name": "BaseBdev1",
00:07:35.938 "uuid": "5a3f5834-10c2-4b5a-a966-628b1824f3fa",
00:07:35.938 "is_configured": true,
00:07:35.938 "data_offset": 0,
00:07:35.938 "data_size": 65536
00:07:35.938 },
00:07:35.938 {
00:07:35.938 "name": "BaseBdev2",
00:07:35.938 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:35.938 "is_configured": false,
00:07:35.938 "data_offset": 0,
00:07:35.938 "data_size": 0
00:07:35.938 }
00:07:35.938 ]
00:07:35.938 }'
00:07:35.938 10:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:35.938 10:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:36.197 10:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:07:36.197 10:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:36.197 10:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:36.455 [2024-11-15 10:35:57.386915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:07:36.455 [2024-11-15 10:35:57.386989] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:07:36.455 [2024-11-15 10:35:57.387002] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:07:36.455 [2024-11-15 10:35:57.387347] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:07:36.455 [2024-11-15 10:35:57.387581] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:07:36.455 [2024-11-15 10:35:57.387606] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:07:36.455 [2024-11-15 10:35:57.387935] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:36.455 BaseBdev2
00:07:36.455 10:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:36.455 10:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:07:36.455 10:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:07:36.455 10:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:07:36.455 10:35:57
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:36.455 10:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:36.455 10:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:36.455 10:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:36.455 10:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.455 10:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.455 10:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.455 10:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:36.455 10:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.455 10:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.455 [ 00:07:36.455 { 00:07:36.455 "name": "BaseBdev2", 00:07:36.455 "aliases": [ 00:07:36.455 "fccf1f26-4a0f-4e2a-bf8d-1a4ebfce3523" 00:07:36.455 ], 00:07:36.455 "product_name": "Malloc disk", 00:07:36.455 "block_size": 512, 00:07:36.455 "num_blocks": 65536, 00:07:36.455 "uuid": "fccf1f26-4a0f-4e2a-bf8d-1a4ebfce3523", 00:07:36.455 "assigned_rate_limits": { 00:07:36.455 "rw_ios_per_sec": 0, 00:07:36.455 "rw_mbytes_per_sec": 0, 00:07:36.455 "r_mbytes_per_sec": 0, 00:07:36.455 "w_mbytes_per_sec": 0 00:07:36.455 }, 00:07:36.455 "claimed": true, 00:07:36.455 "claim_type": "exclusive_write", 00:07:36.455 "zoned": false, 00:07:36.455 "supported_io_types": { 00:07:36.455 "read": true, 00:07:36.455 "write": true, 00:07:36.455 "unmap": true, 00:07:36.455 "flush": true, 00:07:36.455 "reset": true, 00:07:36.455 "nvme_admin": false, 00:07:36.455 "nvme_io": false, 00:07:36.455 "nvme_io_md": 
false, 00:07:36.455 "write_zeroes": true, 00:07:36.455 "zcopy": true, 00:07:36.455 "get_zone_info": false, 00:07:36.455 "zone_management": false, 00:07:36.455 "zone_append": false, 00:07:36.455 "compare": false, 00:07:36.455 "compare_and_write": false, 00:07:36.455 "abort": true, 00:07:36.455 "seek_hole": false, 00:07:36.455 "seek_data": false, 00:07:36.455 "copy": true, 00:07:36.455 "nvme_iov_md": false 00:07:36.455 }, 00:07:36.455 "memory_domains": [ 00:07:36.455 { 00:07:36.455 "dma_device_id": "system", 00:07:36.455 "dma_device_type": 1 00:07:36.455 }, 00:07:36.455 { 00:07:36.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.455 "dma_device_type": 2 00:07:36.455 } 00:07:36.455 ], 00:07:36.455 "driver_specific": {} 00:07:36.455 } 00:07:36.455 ] 00:07:36.455 10:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.455 10:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:36.455 10:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:36.455 10:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:36.456 10:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:36.456 10:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:36.456 10:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:36.456 10:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:36.456 10:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.456 10:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:36.456 10:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:36.456 10:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.456 10:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.456 10:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.456 10:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.456 10:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.456 10:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.456 10:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:36.456 10:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.456 10:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.456 "name": "Existed_Raid", 00:07:36.456 "uuid": "7a8bb265-70f9-4e0f-bc64-87732bd132e0", 00:07:36.456 "strip_size_kb": 64, 00:07:36.456 "state": "online", 00:07:36.456 "raid_level": "concat", 00:07:36.456 "superblock": false, 00:07:36.456 "num_base_bdevs": 2, 00:07:36.456 "num_base_bdevs_discovered": 2, 00:07:36.456 "num_base_bdevs_operational": 2, 00:07:36.456 "base_bdevs_list": [ 00:07:36.456 { 00:07:36.456 "name": "BaseBdev1", 00:07:36.456 "uuid": "5a3f5834-10c2-4b5a-a966-628b1824f3fa", 00:07:36.456 "is_configured": true, 00:07:36.456 "data_offset": 0, 00:07:36.456 "data_size": 65536 00:07:36.456 }, 00:07:36.456 { 00:07:36.456 "name": "BaseBdev2", 00:07:36.456 "uuid": "fccf1f26-4a0f-4e2a-bf8d-1a4ebfce3523", 00:07:36.456 "is_configured": true, 00:07:36.456 "data_offset": 0, 00:07:36.456 "data_size": 65536 00:07:36.456 } 00:07:36.456 ] 00:07:36.456 }' 00:07:36.456 10:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:36.456 10:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.047 10:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:37.047 10:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:37.047 10:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:37.047 10:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:37.047 10:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:37.047 10:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:37.047 10:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:37.047 10:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.047 10:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.047 10:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:37.047 [2024-11-15 10:35:57.951456] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:37.047 10:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.047 10:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:37.047 "name": "Existed_Raid", 00:07:37.047 "aliases": [ 00:07:37.047 "7a8bb265-70f9-4e0f-bc64-87732bd132e0" 00:07:37.047 ], 00:07:37.047 "product_name": "Raid Volume", 00:07:37.047 "block_size": 512, 00:07:37.047 "num_blocks": 131072, 00:07:37.047 "uuid": "7a8bb265-70f9-4e0f-bc64-87732bd132e0", 00:07:37.047 "assigned_rate_limits": { 00:07:37.047 "rw_ios_per_sec": 0, 00:07:37.047 "rw_mbytes_per_sec": 0, 00:07:37.047 "r_mbytes_per_sec": 
0, 00:07:37.047 "w_mbytes_per_sec": 0 00:07:37.047 }, 00:07:37.047 "claimed": false, 00:07:37.047 "zoned": false, 00:07:37.047 "supported_io_types": { 00:07:37.047 "read": true, 00:07:37.047 "write": true, 00:07:37.047 "unmap": true, 00:07:37.047 "flush": true, 00:07:37.047 "reset": true, 00:07:37.047 "nvme_admin": false, 00:07:37.047 "nvme_io": false, 00:07:37.047 "nvme_io_md": false, 00:07:37.047 "write_zeroes": true, 00:07:37.047 "zcopy": false, 00:07:37.047 "get_zone_info": false, 00:07:37.047 "zone_management": false, 00:07:37.047 "zone_append": false, 00:07:37.047 "compare": false, 00:07:37.047 "compare_and_write": false, 00:07:37.047 "abort": false, 00:07:37.047 "seek_hole": false, 00:07:37.047 "seek_data": false, 00:07:37.047 "copy": false, 00:07:37.047 "nvme_iov_md": false 00:07:37.047 }, 00:07:37.047 "memory_domains": [ 00:07:37.047 { 00:07:37.047 "dma_device_id": "system", 00:07:37.047 "dma_device_type": 1 00:07:37.047 }, 00:07:37.047 { 00:07:37.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.047 "dma_device_type": 2 00:07:37.047 }, 00:07:37.047 { 00:07:37.047 "dma_device_id": "system", 00:07:37.047 "dma_device_type": 1 00:07:37.047 }, 00:07:37.047 { 00:07:37.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.047 "dma_device_type": 2 00:07:37.047 } 00:07:37.047 ], 00:07:37.047 "driver_specific": { 00:07:37.047 "raid": { 00:07:37.047 "uuid": "7a8bb265-70f9-4e0f-bc64-87732bd132e0", 00:07:37.047 "strip_size_kb": 64, 00:07:37.047 "state": "online", 00:07:37.047 "raid_level": "concat", 00:07:37.047 "superblock": false, 00:07:37.047 "num_base_bdevs": 2, 00:07:37.047 "num_base_bdevs_discovered": 2, 00:07:37.047 "num_base_bdevs_operational": 2, 00:07:37.047 "base_bdevs_list": [ 00:07:37.047 { 00:07:37.047 "name": "BaseBdev1", 00:07:37.047 "uuid": "5a3f5834-10c2-4b5a-a966-628b1824f3fa", 00:07:37.047 "is_configured": true, 00:07:37.047 "data_offset": 0, 00:07:37.047 "data_size": 65536 00:07:37.047 }, 00:07:37.047 { 00:07:37.047 "name": "BaseBdev2", 
00:07:37.047 "uuid": "fccf1f26-4a0f-4e2a-bf8d-1a4ebfce3523", 00:07:37.047 "is_configured": true, 00:07:37.047 "data_offset": 0, 00:07:37.047 "data_size": 65536 00:07:37.047 } 00:07:37.047 ] 00:07:37.047 } 00:07:37.047 } 00:07:37.047 }' 00:07:37.047 10:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:37.047 10:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:37.047 BaseBdev2' 00:07:37.047 10:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:37.047 10:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:37.047 10:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:37.047 10:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:37.047 10:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:37.047 10:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.047 10:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.047 10:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.047 10:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:37.047 10:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:37.047 10:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:37.047 10:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:07:37.047 10:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:37.047 10:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.047 10:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.047 10:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.305 10:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:37.305 10:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:37.305 10:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:37.305 10:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.305 10:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.305 [2024-11-15 10:35:58.215245] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:37.305 [2024-11-15 10:35:58.215286] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:37.305 [2024-11-15 10:35:58.215352] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:37.305 10:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.305 10:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:37.305 10:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:37.305 10:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:37.305 10:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:37.305 10:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:37.305 10:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:37.305 10:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.305 10:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:37.305 10:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:37.305 10:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.305 10:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:37.305 10:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.305 10:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.305 10:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.305 10:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.305 10:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.305 10:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.305 10:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.306 10:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.306 10:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.306 10:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.306 "name": "Existed_Raid", 00:07:37.306 "uuid": "7a8bb265-70f9-4e0f-bc64-87732bd132e0", 00:07:37.306 "strip_size_kb": 64, 00:07:37.306 
"state": "offline", 00:07:37.306 "raid_level": "concat", 00:07:37.306 "superblock": false, 00:07:37.306 "num_base_bdevs": 2, 00:07:37.306 "num_base_bdevs_discovered": 1, 00:07:37.306 "num_base_bdevs_operational": 1, 00:07:37.306 "base_bdevs_list": [ 00:07:37.306 { 00:07:37.306 "name": null, 00:07:37.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.306 "is_configured": false, 00:07:37.306 "data_offset": 0, 00:07:37.306 "data_size": 65536 00:07:37.306 }, 00:07:37.306 { 00:07:37.306 "name": "BaseBdev2", 00:07:37.306 "uuid": "fccf1f26-4a0f-4e2a-bf8d-1a4ebfce3523", 00:07:37.306 "is_configured": true, 00:07:37.306 "data_offset": 0, 00:07:37.306 "data_size": 65536 00:07:37.306 } 00:07:37.306 ] 00:07:37.306 }' 00:07:37.306 10:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.306 10:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.872 10:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:37.872 10:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:37.872 10:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.872 10:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:37.872 10:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.872 10:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.872 10:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.872 10:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:37.872 10:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:37.872 10:35:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:37.872 10:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.872 10:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.872 [2024-11-15 10:35:58.915519] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:37.872 [2024-11-15 10:35:58.915585] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:37.872 10:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.872 10:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:37.872 10:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:37.872 10:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.872 10:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:37.872 10:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.872 10:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.872 10:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.132 10:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:38.132 10:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:38.132 10:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:38.132 10:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61652 00:07:38.132 10:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61652 ']' 00:07:38.132 10:35:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61652 00:07:38.132 10:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:38.132 10:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:38.132 10:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61652 00:07:38.132 killing process with pid 61652 00:07:38.132 10:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:38.132 10:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:38.132 10:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61652' 00:07:38.132 10:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61652 00:07:38.132 [2024-11-15 10:35:59.089029] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:38.133 10:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61652 00:07:38.133 [2024-11-15 10:35:59.103737] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:39.083 10:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:39.083 00:07:39.083 real 0m5.554s 00:07:39.083 user 0m8.438s 00:07:39.083 sys 0m0.769s 00:07:39.083 10:36:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.083 10:36:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.083 ************************************ 00:07:39.083 END TEST raid_state_function_test 00:07:39.083 ************************************ 00:07:39.083 10:36:00 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:39.083 10:36:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:07:39.084 10:36:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.084 10:36:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:39.084 ************************************ 00:07:39.084 START TEST raid_state_function_test_sb 00:07:39.084 ************************************ 00:07:39.084 10:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:07:39.084 10:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:39.084 10:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:39.084 10:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:39.084 10:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:39.084 10:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:39.084 10:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:39.084 10:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:39.084 10:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:39.084 10:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:39.084 10:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:39.084 10:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:39.084 10:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:39.084 10:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:39.084 10:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:39.084 10:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:39.084 10:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:39.084 10:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:39.084 10:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:39.084 10:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:39.084 10:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:39.084 10:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:39.084 10:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:39.084 10:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:39.084 10:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61905 00:07:39.084 Process raid pid: 61905 00:07:39.084 10:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61905' 00:07:39.084 10:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:39.084 10:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61905 00:07:39.084 10:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61905 ']' 00:07:39.084 10:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.084 10:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:39.084 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:07:39.084 10:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.084 10:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:39.084 10:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.342 [2024-11-15 10:36:00.320486] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:07:39.342 [2024-11-15 10:36:00.320757] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:39.602 [2024-11-15 10:36:00.524340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.602 [2024-11-15 10:36:00.664669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.861 [2024-11-15 10:36:00.906511] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:39.861 [2024-11-15 10:36:00.906550] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:40.428 10:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:40.428 10:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:40.428 10:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:40.428 10:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.428 10:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.428 [2024-11-15 10:36:01.325717] bdev.c:8277:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:07:40.428 [2024-11-15 10:36:01.325783] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:40.428 [2024-11-15 10:36:01.325801] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:40.428 [2024-11-15 10:36:01.325827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:40.428 10:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.428 10:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:40.428 10:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:40.428 10:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:40.428 10:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:40.428 10:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:40.428 10:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:40.428 10:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.428 10:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.428 10:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.428 10:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.428 10:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.428 10:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:40.428 10:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.428 10:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.428 10:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.428 10:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.428 "name": "Existed_Raid", 00:07:40.428 "uuid": "148cf729-a0ef-441e-a19c-75a9c0d71365", 00:07:40.428 "strip_size_kb": 64, 00:07:40.428 "state": "configuring", 00:07:40.428 "raid_level": "concat", 00:07:40.428 "superblock": true, 00:07:40.428 "num_base_bdevs": 2, 00:07:40.428 "num_base_bdevs_discovered": 0, 00:07:40.428 "num_base_bdevs_operational": 2, 00:07:40.428 "base_bdevs_list": [ 00:07:40.428 { 00:07:40.428 "name": "BaseBdev1", 00:07:40.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:40.428 "is_configured": false, 00:07:40.428 "data_offset": 0, 00:07:40.428 "data_size": 0 00:07:40.428 }, 00:07:40.428 { 00:07:40.428 "name": "BaseBdev2", 00:07:40.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:40.428 "is_configured": false, 00:07:40.428 "data_offset": 0, 00:07:40.428 "data_size": 0 00:07:40.428 } 00:07:40.428 ] 00:07:40.428 }' 00:07:40.428 10:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.428 10:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.687 10:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:40.687 10:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.687 10:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.945 [2024-11-15 10:36:01.845774] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:07:40.945 [2024-11-15 10:36:01.845816] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:40.945 10:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.945 10:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:40.945 10:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.945 10:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.945 [2024-11-15 10:36:01.853753] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:40.945 [2024-11-15 10:36:01.853805] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:40.945 [2024-11-15 10:36:01.853820] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:40.945 [2024-11-15 10:36:01.853839] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:40.945 10:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.945 10:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:40.945 10:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.945 10:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.945 [2024-11-15 10:36:01.898692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:40.945 BaseBdev1 00:07:40.945 10:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.945 10:36:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:40.945 10:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:40.945 10:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:40.945 10:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:40.945 10:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:40.945 10:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:40.945 10:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:40.945 10:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.945 10:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.945 10:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.945 10:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:40.945 10:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.945 10:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.945 [ 00:07:40.945 { 00:07:40.945 "name": "BaseBdev1", 00:07:40.945 "aliases": [ 00:07:40.945 "ca8b7045-3c12-4f33-ba96-072c1574eb07" 00:07:40.945 ], 00:07:40.945 "product_name": "Malloc disk", 00:07:40.945 "block_size": 512, 00:07:40.945 "num_blocks": 65536, 00:07:40.945 "uuid": "ca8b7045-3c12-4f33-ba96-072c1574eb07", 00:07:40.945 "assigned_rate_limits": { 00:07:40.945 "rw_ios_per_sec": 0, 00:07:40.945 "rw_mbytes_per_sec": 0, 00:07:40.945 "r_mbytes_per_sec": 0, 00:07:40.945 "w_mbytes_per_sec": 0 00:07:40.945 }, 00:07:40.945 "claimed": true, 
00:07:40.945 "claim_type": "exclusive_write", 00:07:40.945 "zoned": false, 00:07:40.945 "supported_io_types": { 00:07:40.945 "read": true, 00:07:40.945 "write": true, 00:07:40.945 "unmap": true, 00:07:40.945 "flush": true, 00:07:40.945 "reset": true, 00:07:40.945 "nvme_admin": false, 00:07:40.945 "nvme_io": false, 00:07:40.945 "nvme_io_md": false, 00:07:40.945 "write_zeroes": true, 00:07:40.945 "zcopy": true, 00:07:40.945 "get_zone_info": false, 00:07:40.945 "zone_management": false, 00:07:40.945 "zone_append": false, 00:07:40.945 "compare": false, 00:07:40.945 "compare_and_write": false, 00:07:40.945 "abort": true, 00:07:40.945 "seek_hole": false, 00:07:40.945 "seek_data": false, 00:07:40.945 "copy": true, 00:07:40.945 "nvme_iov_md": false 00:07:40.945 }, 00:07:40.945 "memory_domains": [ 00:07:40.945 { 00:07:40.945 "dma_device_id": "system", 00:07:40.945 "dma_device_type": 1 00:07:40.945 }, 00:07:40.945 { 00:07:40.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.945 "dma_device_type": 2 00:07:40.945 } 00:07:40.945 ], 00:07:40.945 "driver_specific": {} 00:07:40.945 } 00:07:40.945 ] 00:07:40.945 10:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.945 10:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:40.945 10:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:40.945 10:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:40.945 10:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:40.945 10:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:40.945 10:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:40.945 10:36:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:40.945 10:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.945 10:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.945 10:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.945 10:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.945 10:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.945 10:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.945 10:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:40.945 10:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.945 10:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.945 10:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.945 "name": "Existed_Raid", 00:07:40.945 "uuid": "fc33e373-8374-4747-90c0-76006fabe760", 00:07:40.945 "strip_size_kb": 64, 00:07:40.945 "state": "configuring", 00:07:40.945 "raid_level": "concat", 00:07:40.945 "superblock": true, 00:07:40.945 "num_base_bdevs": 2, 00:07:40.945 "num_base_bdevs_discovered": 1, 00:07:40.945 "num_base_bdevs_operational": 2, 00:07:40.945 "base_bdevs_list": [ 00:07:40.945 { 00:07:40.945 "name": "BaseBdev1", 00:07:40.945 "uuid": "ca8b7045-3c12-4f33-ba96-072c1574eb07", 00:07:40.945 "is_configured": true, 00:07:40.945 "data_offset": 2048, 00:07:40.945 "data_size": 63488 00:07:40.945 }, 00:07:40.945 { 00:07:40.945 "name": "BaseBdev2", 00:07:40.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:40.945 
"is_configured": false, 00:07:40.945 "data_offset": 0, 00:07:40.945 "data_size": 0 00:07:40.945 } 00:07:40.945 ] 00:07:40.945 }' 00:07:40.945 10:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.945 10:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.511 10:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:41.511 10:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.511 10:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.511 [2024-11-15 10:36:02.422880] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:41.511 [2024-11-15 10:36:02.422944] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:41.511 10:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.511 10:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:41.511 10:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.511 10:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.511 [2024-11-15 10:36:02.430933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:41.511 [2024-11-15 10:36:02.433398] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:41.511 [2024-11-15 10:36:02.433449] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:41.511 10:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.511 10:36:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:41.511 10:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:41.511 10:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:41.511 10:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:41.511 10:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:41.511 10:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:41.511 10:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.512 10:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:41.512 10:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.512 10:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.512 10:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.512 10:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.512 10:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.512 10:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.512 10:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.512 10:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.512 10:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.512 10:36:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.512 "name": "Existed_Raid", 00:07:41.512 "uuid": "d402ddd4-463a-4e73-af37-fcc6e6001e54", 00:07:41.512 "strip_size_kb": 64, 00:07:41.512 "state": "configuring", 00:07:41.512 "raid_level": "concat", 00:07:41.512 "superblock": true, 00:07:41.512 "num_base_bdevs": 2, 00:07:41.512 "num_base_bdevs_discovered": 1, 00:07:41.512 "num_base_bdevs_operational": 2, 00:07:41.512 "base_bdevs_list": [ 00:07:41.512 { 00:07:41.512 "name": "BaseBdev1", 00:07:41.512 "uuid": "ca8b7045-3c12-4f33-ba96-072c1574eb07", 00:07:41.512 "is_configured": true, 00:07:41.512 "data_offset": 2048, 00:07:41.512 "data_size": 63488 00:07:41.512 }, 00:07:41.512 { 00:07:41.512 "name": "BaseBdev2", 00:07:41.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.512 "is_configured": false, 00:07:41.512 "data_offset": 0, 00:07:41.512 "data_size": 0 00:07:41.512 } 00:07:41.512 ] 00:07:41.512 }' 00:07:41.512 10:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.512 10:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.077 10:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:42.078 10:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.078 10:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.078 [2024-11-15 10:36:03.012409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:42.078 [2024-11-15 10:36:03.012807] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:42.078 [2024-11-15 10:36:03.012832] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:42.078 BaseBdev2 00:07:42.078 [2024-11-15 10:36:03.013213] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:42.078 [2024-11-15 10:36:03.013444] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:42.078 [2024-11-15 10:36:03.013469] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:42.078 10:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.078 [2024-11-15 10:36:03.013684] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:42.078 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:42.078 10:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:42.078 10:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:42.078 10:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:42.078 10:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:42.078 10:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:42.078 10:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:42.078 10:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.078 10:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.078 10:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.078 10:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:42.078 10:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.078 
10:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.078 [ 00:07:42.078 { 00:07:42.078 "name": "BaseBdev2", 00:07:42.078 "aliases": [ 00:07:42.078 "4e002c12-0d99-45e8-913b-33e42b9c6594" 00:07:42.078 ], 00:07:42.078 "product_name": "Malloc disk", 00:07:42.078 "block_size": 512, 00:07:42.078 "num_blocks": 65536, 00:07:42.078 "uuid": "4e002c12-0d99-45e8-913b-33e42b9c6594", 00:07:42.078 "assigned_rate_limits": { 00:07:42.078 "rw_ios_per_sec": 0, 00:07:42.078 "rw_mbytes_per_sec": 0, 00:07:42.078 "r_mbytes_per_sec": 0, 00:07:42.078 "w_mbytes_per_sec": 0 00:07:42.078 }, 00:07:42.078 "claimed": true, 00:07:42.078 "claim_type": "exclusive_write", 00:07:42.078 "zoned": false, 00:07:42.078 "supported_io_types": { 00:07:42.078 "read": true, 00:07:42.078 "write": true, 00:07:42.078 "unmap": true, 00:07:42.078 "flush": true, 00:07:42.078 "reset": true, 00:07:42.078 "nvme_admin": false, 00:07:42.078 "nvme_io": false, 00:07:42.078 "nvme_io_md": false, 00:07:42.078 "write_zeroes": true, 00:07:42.078 "zcopy": true, 00:07:42.078 "get_zone_info": false, 00:07:42.078 "zone_management": false, 00:07:42.078 "zone_append": false, 00:07:42.078 "compare": false, 00:07:42.078 "compare_and_write": false, 00:07:42.078 "abort": true, 00:07:42.078 "seek_hole": false, 00:07:42.078 "seek_data": false, 00:07:42.078 "copy": true, 00:07:42.078 "nvme_iov_md": false 00:07:42.078 }, 00:07:42.078 "memory_domains": [ 00:07:42.078 { 00:07:42.078 "dma_device_id": "system", 00:07:42.078 "dma_device_type": 1 00:07:42.078 }, 00:07:42.078 { 00:07:42.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.078 "dma_device_type": 2 00:07:42.078 } 00:07:42.078 ], 00:07:42.078 "driver_specific": {} 00:07:42.078 } 00:07:42.078 ] 00:07:42.078 10:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.078 10:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:42.078 10:36:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:42.078 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:42.078 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:42.078 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:42.078 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:42.078 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:42.078 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:42.078 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:42.078 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.078 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.078 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.078 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.078 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.078 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.078 10:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.078 10:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.078 10:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.078 10:36:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.078 "name": "Existed_Raid", 00:07:42.078 "uuid": "d402ddd4-463a-4e73-af37-fcc6e6001e54", 00:07:42.078 "strip_size_kb": 64, 00:07:42.078 "state": "online", 00:07:42.078 "raid_level": "concat", 00:07:42.078 "superblock": true, 00:07:42.078 "num_base_bdevs": 2, 00:07:42.078 "num_base_bdevs_discovered": 2, 00:07:42.078 "num_base_bdevs_operational": 2, 00:07:42.078 "base_bdevs_list": [ 00:07:42.078 { 00:07:42.078 "name": "BaseBdev1", 00:07:42.078 "uuid": "ca8b7045-3c12-4f33-ba96-072c1574eb07", 00:07:42.078 "is_configured": true, 00:07:42.078 "data_offset": 2048, 00:07:42.078 "data_size": 63488 00:07:42.078 }, 00:07:42.078 { 00:07:42.078 "name": "BaseBdev2", 00:07:42.078 "uuid": "4e002c12-0d99-45e8-913b-33e42b9c6594", 00:07:42.078 "is_configured": true, 00:07:42.078 "data_offset": 2048, 00:07:42.078 "data_size": 63488 00:07:42.078 } 00:07:42.078 ] 00:07:42.078 }' 00:07:42.078 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.078 10:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.643 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:42.643 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:42.643 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:42.643 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:42.643 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:42.643 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:42.643 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:07:42.643 10:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.643 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:42.643 10:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.643 [2024-11-15 10:36:03.561149] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:42.643 10:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.643 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:42.643 "name": "Existed_Raid", 00:07:42.643 "aliases": [ 00:07:42.643 "d402ddd4-463a-4e73-af37-fcc6e6001e54" 00:07:42.643 ], 00:07:42.643 "product_name": "Raid Volume", 00:07:42.643 "block_size": 512, 00:07:42.643 "num_blocks": 126976, 00:07:42.643 "uuid": "d402ddd4-463a-4e73-af37-fcc6e6001e54", 00:07:42.643 "assigned_rate_limits": { 00:07:42.643 "rw_ios_per_sec": 0, 00:07:42.643 "rw_mbytes_per_sec": 0, 00:07:42.643 "r_mbytes_per_sec": 0, 00:07:42.643 "w_mbytes_per_sec": 0 00:07:42.643 }, 00:07:42.643 "claimed": false, 00:07:42.643 "zoned": false, 00:07:42.643 "supported_io_types": { 00:07:42.643 "read": true, 00:07:42.643 "write": true, 00:07:42.643 "unmap": true, 00:07:42.643 "flush": true, 00:07:42.643 "reset": true, 00:07:42.643 "nvme_admin": false, 00:07:42.643 "nvme_io": false, 00:07:42.643 "nvme_io_md": false, 00:07:42.643 "write_zeroes": true, 00:07:42.643 "zcopy": false, 00:07:42.643 "get_zone_info": false, 00:07:42.643 "zone_management": false, 00:07:42.643 "zone_append": false, 00:07:42.643 "compare": false, 00:07:42.643 "compare_and_write": false, 00:07:42.643 "abort": false, 00:07:42.643 "seek_hole": false, 00:07:42.643 "seek_data": false, 00:07:42.643 "copy": false, 00:07:42.643 "nvme_iov_md": false 00:07:42.643 }, 00:07:42.643 "memory_domains": [ 00:07:42.643 { 00:07:42.643 
"dma_device_id": "system", 00:07:42.643 "dma_device_type": 1 00:07:42.643 }, 00:07:42.643 { 00:07:42.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.643 "dma_device_type": 2 00:07:42.643 }, 00:07:42.643 { 00:07:42.643 "dma_device_id": "system", 00:07:42.643 "dma_device_type": 1 00:07:42.643 }, 00:07:42.643 { 00:07:42.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.643 "dma_device_type": 2 00:07:42.643 } 00:07:42.643 ], 00:07:42.643 "driver_specific": { 00:07:42.643 "raid": { 00:07:42.643 "uuid": "d402ddd4-463a-4e73-af37-fcc6e6001e54", 00:07:42.643 "strip_size_kb": 64, 00:07:42.643 "state": "online", 00:07:42.643 "raid_level": "concat", 00:07:42.643 "superblock": true, 00:07:42.643 "num_base_bdevs": 2, 00:07:42.643 "num_base_bdevs_discovered": 2, 00:07:42.643 "num_base_bdevs_operational": 2, 00:07:42.643 "base_bdevs_list": [ 00:07:42.643 { 00:07:42.643 "name": "BaseBdev1", 00:07:42.643 "uuid": "ca8b7045-3c12-4f33-ba96-072c1574eb07", 00:07:42.643 "is_configured": true, 00:07:42.643 "data_offset": 2048, 00:07:42.643 "data_size": 63488 00:07:42.643 }, 00:07:42.643 { 00:07:42.643 "name": "BaseBdev2", 00:07:42.643 "uuid": "4e002c12-0d99-45e8-913b-33e42b9c6594", 00:07:42.643 "is_configured": true, 00:07:42.643 "data_offset": 2048, 00:07:42.643 "data_size": 63488 00:07:42.643 } 00:07:42.643 ] 00:07:42.643 } 00:07:42.643 } 00:07:42.643 }' 00:07:42.643 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:42.643 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:42.643 BaseBdev2' 00:07:42.643 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:42.643 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:42.643 10:36:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:42.643 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:42.643 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:42.643 10:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.643 10:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.644 10:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.644 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:42.644 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:42.644 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:42.644 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:42.644 10:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.644 10:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.644 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:42.644 10:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.901 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:42.901 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:42.901 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:07:42.901 10:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.901 10:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.901 [2024-11-15 10:36:03.808833] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:42.901 [2024-11-15 10:36:03.809040] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:42.901 [2024-11-15 10:36:03.809145] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:42.901 10:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.901 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:42.901 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:42.901 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:42.901 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:42.901 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:42.901 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:42.901 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:42.901 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:42.901 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:42.901 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:42.901 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:07:42.901 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.901 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.901 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.901 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.901 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.901 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.901 10:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.901 10:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.901 10:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.901 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.901 "name": "Existed_Raid", 00:07:42.901 "uuid": "d402ddd4-463a-4e73-af37-fcc6e6001e54", 00:07:42.901 "strip_size_kb": 64, 00:07:42.901 "state": "offline", 00:07:42.901 "raid_level": "concat", 00:07:42.901 "superblock": true, 00:07:42.901 "num_base_bdevs": 2, 00:07:42.901 "num_base_bdevs_discovered": 1, 00:07:42.901 "num_base_bdevs_operational": 1, 00:07:42.901 "base_bdevs_list": [ 00:07:42.901 { 00:07:42.901 "name": null, 00:07:42.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.901 "is_configured": false, 00:07:42.901 "data_offset": 0, 00:07:42.901 "data_size": 63488 00:07:42.901 }, 00:07:42.901 { 00:07:42.901 "name": "BaseBdev2", 00:07:42.901 "uuid": "4e002c12-0d99-45e8-913b-33e42b9c6594", 00:07:42.901 "is_configured": true, 00:07:42.901 "data_offset": 2048, 00:07:42.901 "data_size": 63488 00:07:42.901 } 00:07:42.901 ] 
00:07:42.901 }' 00:07:42.902 10:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.902 10:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.466 10:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:43.466 10:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:43.466 10:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.466 10:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:43.466 10:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.466 10:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.466 10:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.466 10:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:43.466 10:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:43.466 10:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:43.466 10:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.466 10:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.466 [2024-11-15 10:36:04.503901] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:43.466 [2024-11-15 10:36:04.504119] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:43.466 10:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.466 10:36:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:43.466 10:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:43.466 10:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.466 10:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.466 10:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.466 10:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:43.466 10:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.723 10:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:43.723 10:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:43.723 10:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:43.723 10:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61905 00:07:43.723 10:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61905 ']' 00:07:43.723 10:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61905 00:07:43.723 10:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:43.723 10:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:43.723 10:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61905 00:07:43.723 killing process with pid 61905 00:07:43.723 10:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:43.723 10:36:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:43.723 10:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61905' 00:07:43.723 10:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61905 00:07:43.723 [2024-11-15 10:36:04.677482] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:43.723 10:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61905 00:07:43.723 [2024-11-15 10:36:04.692538] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:44.655 10:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:44.655 00:07:44.656 real 0m5.538s 00:07:44.656 user 0m8.301s 00:07:44.656 sys 0m0.835s 00:07:44.656 ************************************ 00:07:44.656 END TEST raid_state_function_test_sb 00:07:44.656 ************************************ 00:07:44.656 10:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.656 10:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.656 10:36:05 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:44.656 10:36:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:44.656 10:36:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.656 10:36:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:44.656 ************************************ 00:07:44.656 START TEST raid_superblock_test 00:07:44.656 ************************************ 00:07:44.656 10:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:07:44.656 10:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:44.656 10:36:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:44.656 10:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:44.656 10:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:44.656 10:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:44.656 10:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:44.656 10:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:44.656 10:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:44.656 10:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:44.656 10:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:44.656 10:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:44.656 10:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:44.656 10:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:44.656 10:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:44.656 10:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:44.656 10:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:44.656 10:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62157 00:07:44.656 10:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:44.656 10:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62157 00:07:44.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:44.656 10:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62157 ']' 00:07:44.656 10:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.656 10:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:44.656 10:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.656 10:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:44.656 10:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.913 [2024-11-15 10:36:05.868893] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:07:44.913 [2024-11-15 10:36:05.869065] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62157 ] 00:07:44.913 [2024-11-15 10:36:06.042707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.172 [2024-11-15 10:36:06.176818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.430 [2024-11-15 10:36:06.381511] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:45.430 [2024-11-15 10:36:06.381596] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:45.690 10:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:45.690 10:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:45.690 10:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:45.690 10:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= 
num_base_bdevs )) 00:07:45.690 10:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:45.690 10:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:45.690 10:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:45.690 10:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:45.690 10:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:45.690 10:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:45.690 10:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:45.690 10:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.690 10:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.949 malloc1 00:07:45.949 10:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.949 10:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:45.949 10:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.949 10:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.949 [2024-11-15 10:36:06.892087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:45.949 [2024-11-15 10:36:06.892171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:45.949 [2024-11-15 10:36:06.892203] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:45.949 [2024-11-15 10:36:06.892220] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:07:45.949 [2024-11-15 10:36:06.895114] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:45.949 [2024-11-15 10:36:06.895162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:45.949 pt1 00:07:45.949 10:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.949 10:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:45.949 10:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:45.949 10:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:45.949 10:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:45.949 10:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:45.949 10:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:45.949 10:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:45.949 10:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:45.949 10:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:45.949 10:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.949 10:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.949 malloc2 00:07:45.949 10:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.949 10:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:45.949 10:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:45.949 10:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.949 [2024-11-15 10:36:06.944356] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:45.949 [2024-11-15 10:36:06.944435] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:45.949 [2024-11-15 10:36:06.944481] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:45.949 [2024-11-15 10:36:06.944528] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:45.949 [2024-11-15 10:36:06.947424] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:45.949 pt2 00:07:45.949 [2024-11-15 10:36:06.948406] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:45.949 10:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.949 10:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:45.949 10:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:45.949 10:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:45.949 10:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.949 10:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.949 [2024-11-15 10:36:06.952733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:45.949 [2024-11-15 10:36:06.955231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:45.949 [2024-11-15 10:36:06.955640] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:45.949 [2024-11-15 10:36:06.955667] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 
126976, blocklen 512 00:07:45.949 [2024-11-15 10:36:06.956003] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:45.949 [2024-11-15 10:36:06.956209] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:45.949 [2024-11-15 10:36:06.956232] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:45.949 [2024-11-15 10:36:06.956424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:45.949 10:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.949 10:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:45.949 10:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:45.949 10:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:45.949 10:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:45.949 10:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:45.949 10:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:45.949 10:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.949 10:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.949 10:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.949 10:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.949 10:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.949 10:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.949 10:36:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.949 10:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:45.950 10:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.950 10:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.950 "name": "raid_bdev1", 00:07:45.950 "uuid": "69c18a1d-7279-44ca-bb92-2002dfd199ef", 00:07:45.950 "strip_size_kb": 64, 00:07:45.950 "state": "online", 00:07:45.950 "raid_level": "concat", 00:07:45.950 "superblock": true, 00:07:45.950 "num_base_bdevs": 2, 00:07:45.950 "num_base_bdevs_discovered": 2, 00:07:45.950 "num_base_bdevs_operational": 2, 00:07:45.950 "base_bdevs_list": [ 00:07:45.950 { 00:07:45.950 "name": "pt1", 00:07:45.950 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:45.950 "is_configured": true, 00:07:45.950 "data_offset": 2048, 00:07:45.950 "data_size": 63488 00:07:45.950 }, 00:07:45.950 { 00:07:45.950 "name": "pt2", 00:07:45.950 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:45.950 "is_configured": true, 00:07:45.950 "data_offset": 2048, 00:07:45.950 "data_size": 63488 00:07:45.950 } 00:07:45.950 ] 00:07:45.950 }' 00:07:45.950 10:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.950 10:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.516 10:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:46.516 10:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:46.516 10:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:46.516 10:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:46.516 10:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # 
local name 00:07:46.516 10:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:46.516 10:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:46.516 10:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.516 10:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.516 10:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:46.516 [2024-11-15 10:36:07.505206] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:46.516 10:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.516 10:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:46.516 "name": "raid_bdev1", 00:07:46.516 "aliases": [ 00:07:46.516 "69c18a1d-7279-44ca-bb92-2002dfd199ef" 00:07:46.516 ], 00:07:46.516 "product_name": "Raid Volume", 00:07:46.516 "block_size": 512, 00:07:46.516 "num_blocks": 126976, 00:07:46.516 "uuid": "69c18a1d-7279-44ca-bb92-2002dfd199ef", 00:07:46.516 "assigned_rate_limits": { 00:07:46.516 "rw_ios_per_sec": 0, 00:07:46.516 "rw_mbytes_per_sec": 0, 00:07:46.516 "r_mbytes_per_sec": 0, 00:07:46.516 "w_mbytes_per_sec": 0 00:07:46.516 }, 00:07:46.516 "claimed": false, 00:07:46.516 "zoned": false, 00:07:46.516 "supported_io_types": { 00:07:46.516 "read": true, 00:07:46.516 "write": true, 00:07:46.516 "unmap": true, 00:07:46.516 "flush": true, 00:07:46.516 "reset": true, 00:07:46.516 "nvme_admin": false, 00:07:46.516 "nvme_io": false, 00:07:46.516 "nvme_io_md": false, 00:07:46.516 "write_zeroes": true, 00:07:46.516 "zcopy": false, 00:07:46.516 "get_zone_info": false, 00:07:46.516 "zone_management": false, 00:07:46.516 "zone_append": false, 00:07:46.516 "compare": false, 00:07:46.516 "compare_and_write": false, 00:07:46.516 "abort": false, 00:07:46.516 
"seek_hole": false, 00:07:46.516 "seek_data": false, 00:07:46.516 "copy": false, 00:07:46.516 "nvme_iov_md": false 00:07:46.516 }, 00:07:46.516 "memory_domains": [ 00:07:46.516 { 00:07:46.516 "dma_device_id": "system", 00:07:46.516 "dma_device_type": 1 00:07:46.516 }, 00:07:46.516 { 00:07:46.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.516 "dma_device_type": 2 00:07:46.516 }, 00:07:46.516 { 00:07:46.516 "dma_device_id": "system", 00:07:46.516 "dma_device_type": 1 00:07:46.516 }, 00:07:46.516 { 00:07:46.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.516 "dma_device_type": 2 00:07:46.516 } 00:07:46.516 ], 00:07:46.516 "driver_specific": { 00:07:46.516 "raid": { 00:07:46.516 "uuid": "69c18a1d-7279-44ca-bb92-2002dfd199ef", 00:07:46.516 "strip_size_kb": 64, 00:07:46.516 "state": "online", 00:07:46.516 "raid_level": "concat", 00:07:46.516 "superblock": true, 00:07:46.516 "num_base_bdevs": 2, 00:07:46.516 "num_base_bdevs_discovered": 2, 00:07:46.516 "num_base_bdevs_operational": 2, 00:07:46.516 "base_bdevs_list": [ 00:07:46.516 { 00:07:46.516 "name": "pt1", 00:07:46.516 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:46.516 "is_configured": true, 00:07:46.517 "data_offset": 2048, 00:07:46.517 "data_size": 63488 00:07:46.517 }, 00:07:46.517 { 00:07:46.517 "name": "pt2", 00:07:46.517 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:46.517 "is_configured": true, 00:07:46.517 "data_offset": 2048, 00:07:46.517 "data_size": 63488 00:07:46.517 } 00:07:46.517 ] 00:07:46.517 } 00:07:46.517 } 00:07:46.517 }' 00:07:46.517 10:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:46.517 10:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:46.517 pt2' 00:07:46.517 10:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:07:46.517 10:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:46.517 10:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:46.517 10:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:46.517 10:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.517 10:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.517 10:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.775 10:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.775 10:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:46.775 10:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:46.775 10:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:46.776 10:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:46.776 10:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.776 10:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.776 10:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.776 10:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.776 10:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:46.776 10:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:46.776 10:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq 
-r '.[] | .uuid' 00:07:46.776 10:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:46.776 10:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.776 10:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.776 [2024-11-15 10:36:07.765195] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:46.776 10:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.776 10:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=69c18a1d-7279-44ca-bb92-2002dfd199ef 00:07:46.776 10:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 69c18a1d-7279-44ca-bb92-2002dfd199ef ']' 00:07:46.776 10:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:46.776 10:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.776 10:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.776 [2024-11-15 10:36:07.816859] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:46.776 [2024-11-15 10:36:07.816996] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:46.776 [2024-11-15 10:36:07.817194] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:46.776 [2024-11-15 10:36:07.817374] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:46.776 [2024-11-15 10:36:07.817542] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:46.776 10:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.776 10:36:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.776 10:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.776 10:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:46.776 10:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.776 10:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.776 10:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:46.776 10:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:46.776 10:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:46.776 10:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:46.776 10:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.776 10:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.776 10:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.776 10:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:46.776 10:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:46.776 10:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.776 10:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.776 10:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.776 10:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:46.776 10:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.776 10:36:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:46.776 10:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.776 10:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.034 10:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:47.034 10:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:47.034 10:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:47.034 10:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:47.034 10:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:47.034 10:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:47.034 10:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:47.034 10:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:47.034 10:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:47.034 10:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.034 10:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.034 [2024-11-15 10:36:07.944942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:47.034 [2024-11-15 10:36:07.947440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:47.034 [2024-11-15 10:36:07.947678] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:47.034 [2024-11-15 10:36:07.947817] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:47.034 [2024-11-15 10:36:07.947900] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:47.034 [2024-11-15 10:36:07.948029] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:47.034 request: 00:07:47.034 { 00:07:47.034 "name": "raid_bdev1", 00:07:47.034 "raid_level": "concat", 00:07:47.034 "base_bdevs": [ 00:07:47.034 "malloc1", 00:07:47.034 "malloc2" 00:07:47.034 ], 00:07:47.034 "strip_size_kb": 64, 00:07:47.034 "superblock": false, 00:07:47.034 "method": "bdev_raid_create", 00:07:47.034 "req_id": 1 00:07:47.034 } 00:07:47.034 Got JSON-RPC error response 00:07:47.034 response: 00:07:47.034 { 00:07:47.035 "code": -17, 00:07:47.035 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:47.035 } 00:07:47.035 10:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:47.035 10:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:47.035 10:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:47.035 10:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:47.035 10:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:47.035 10:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.035 10:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.035 10:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.035 10:36:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:47.035 10:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.035 10:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:47.035 10:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:47.035 10:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:47.035 10:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.035 10:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.035 [2024-11-15 10:36:08.016984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:47.035 [2024-11-15 10:36:08.017074] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:47.035 [2024-11-15 10:36:08.017106] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:47.035 [2024-11-15 10:36:08.017126] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:47.035 [2024-11-15 10:36:08.020108] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:47.035 [2024-11-15 10:36:08.020163] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:47.035 [2024-11-15 10:36:08.020286] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:47.035 [2024-11-15 10:36:08.020369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:47.035 pt1 00:07:47.035 10:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.035 10:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:47.035 10:36:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:47.035 10:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:47.035 10:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:47.035 10:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:47.035 10:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:47.035 10:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.035 10:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.035 10:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.035 10:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.035 10:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.035 10:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:47.035 10:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.035 10:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.035 10:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.035 10:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.035 "name": "raid_bdev1", 00:07:47.035 "uuid": "69c18a1d-7279-44ca-bb92-2002dfd199ef", 00:07:47.035 "strip_size_kb": 64, 00:07:47.035 "state": "configuring", 00:07:47.035 "raid_level": "concat", 00:07:47.035 "superblock": true, 00:07:47.035 "num_base_bdevs": 2, 00:07:47.035 "num_base_bdevs_discovered": 1, 00:07:47.035 "num_base_bdevs_operational": 2, 00:07:47.035 "base_bdevs_list": [ 
00:07:47.035 { 00:07:47.035 "name": "pt1", 00:07:47.035 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:47.035 "is_configured": true, 00:07:47.035 "data_offset": 2048, 00:07:47.035 "data_size": 63488 00:07:47.035 }, 00:07:47.035 { 00:07:47.035 "name": null, 00:07:47.035 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:47.035 "is_configured": false, 00:07:47.035 "data_offset": 2048, 00:07:47.035 "data_size": 63488 00:07:47.035 } 00:07:47.035 ] 00:07:47.035 }' 00:07:47.035 10:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.035 10:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.652 10:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:47.652 10:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:47.652 10:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:47.652 10:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:47.652 10:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.652 10:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.652 [2024-11-15 10:36:08.533101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:47.652 [2024-11-15 10:36:08.533192] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:47.652 [2024-11-15 10:36:08.533233] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:47.652 [2024-11-15 10:36:08.533251] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:47.652 [2024-11-15 10:36:08.533858] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:47.652 [2024-11-15 10:36:08.533898] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:47.652 [2024-11-15 10:36:08.534002] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:47.652 [2024-11-15 10:36:08.534038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:47.652 [2024-11-15 10:36:08.534179] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:47.652 [2024-11-15 10:36:08.534200] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:47.652 [2024-11-15 10:36:08.534515] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:47.652 [2024-11-15 10:36:08.534702] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:47.652 [2024-11-15 10:36:08.534719] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:47.652 [2024-11-15 10:36:08.534887] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:47.652 pt2 00:07:47.652 10:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.652 10:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:47.652 10:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:47.653 10:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:47.653 10:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:47.653 10:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:47.653 10:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:47.653 10:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:07:47.653 10:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:47.653 10:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.653 10:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.653 10:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.653 10:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.653 10:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.653 10:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:47.653 10:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.653 10:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.653 10:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.653 10:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.653 "name": "raid_bdev1", 00:07:47.653 "uuid": "69c18a1d-7279-44ca-bb92-2002dfd199ef", 00:07:47.653 "strip_size_kb": 64, 00:07:47.653 "state": "online", 00:07:47.653 "raid_level": "concat", 00:07:47.653 "superblock": true, 00:07:47.653 "num_base_bdevs": 2, 00:07:47.653 "num_base_bdevs_discovered": 2, 00:07:47.653 "num_base_bdevs_operational": 2, 00:07:47.653 "base_bdevs_list": [ 00:07:47.653 { 00:07:47.653 "name": "pt1", 00:07:47.653 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:47.653 "is_configured": true, 00:07:47.653 "data_offset": 2048, 00:07:47.653 "data_size": 63488 00:07:47.653 }, 00:07:47.653 { 00:07:47.653 "name": "pt2", 00:07:47.653 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:47.653 "is_configured": true, 00:07:47.653 "data_offset": 2048, 00:07:47.653 "data_size": 
63488 00:07:47.653 } 00:07:47.653 ] 00:07:47.653 }' 00:07:47.653 10:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.653 10:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.911 10:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:47.911 10:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:47.911 10:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:47.911 10:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:47.911 10:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:47.911 10:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:47.911 10:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:47.911 10:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.911 10:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.911 10:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:48.169 [2024-11-15 10:36:09.073556] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:48.169 10:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.169 10:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:48.169 "name": "raid_bdev1", 00:07:48.169 "aliases": [ 00:07:48.169 "69c18a1d-7279-44ca-bb92-2002dfd199ef" 00:07:48.169 ], 00:07:48.169 "product_name": "Raid Volume", 00:07:48.169 "block_size": 512, 00:07:48.169 "num_blocks": 126976, 00:07:48.169 "uuid": "69c18a1d-7279-44ca-bb92-2002dfd199ef", 00:07:48.169 "assigned_rate_limits": { 00:07:48.169 
"rw_ios_per_sec": 0, 00:07:48.169 "rw_mbytes_per_sec": 0, 00:07:48.169 "r_mbytes_per_sec": 0, 00:07:48.169 "w_mbytes_per_sec": 0 00:07:48.169 }, 00:07:48.169 "claimed": false, 00:07:48.169 "zoned": false, 00:07:48.169 "supported_io_types": { 00:07:48.169 "read": true, 00:07:48.169 "write": true, 00:07:48.169 "unmap": true, 00:07:48.169 "flush": true, 00:07:48.169 "reset": true, 00:07:48.169 "nvme_admin": false, 00:07:48.169 "nvme_io": false, 00:07:48.169 "nvme_io_md": false, 00:07:48.169 "write_zeroes": true, 00:07:48.169 "zcopy": false, 00:07:48.169 "get_zone_info": false, 00:07:48.169 "zone_management": false, 00:07:48.169 "zone_append": false, 00:07:48.169 "compare": false, 00:07:48.169 "compare_and_write": false, 00:07:48.169 "abort": false, 00:07:48.169 "seek_hole": false, 00:07:48.169 "seek_data": false, 00:07:48.169 "copy": false, 00:07:48.169 "nvme_iov_md": false 00:07:48.169 }, 00:07:48.169 "memory_domains": [ 00:07:48.169 { 00:07:48.169 "dma_device_id": "system", 00:07:48.169 "dma_device_type": 1 00:07:48.169 }, 00:07:48.169 { 00:07:48.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.169 "dma_device_type": 2 00:07:48.169 }, 00:07:48.169 { 00:07:48.169 "dma_device_id": "system", 00:07:48.169 "dma_device_type": 1 00:07:48.169 }, 00:07:48.169 { 00:07:48.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.169 "dma_device_type": 2 00:07:48.169 } 00:07:48.169 ], 00:07:48.169 "driver_specific": { 00:07:48.169 "raid": { 00:07:48.169 "uuid": "69c18a1d-7279-44ca-bb92-2002dfd199ef", 00:07:48.169 "strip_size_kb": 64, 00:07:48.169 "state": "online", 00:07:48.169 "raid_level": "concat", 00:07:48.169 "superblock": true, 00:07:48.169 "num_base_bdevs": 2, 00:07:48.169 "num_base_bdevs_discovered": 2, 00:07:48.169 "num_base_bdevs_operational": 2, 00:07:48.169 "base_bdevs_list": [ 00:07:48.169 { 00:07:48.169 "name": "pt1", 00:07:48.169 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:48.169 "is_configured": true, 00:07:48.169 "data_offset": 2048, 00:07:48.169 
"data_size": 63488 00:07:48.169 }, 00:07:48.169 { 00:07:48.169 "name": "pt2", 00:07:48.169 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:48.169 "is_configured": true, 00:07:48.169 "data_offset": 2048, 00:07:48.169 "data_size": 63488 00:07:48.169 } 00:07:48.169 ] 00:07:48.169 } 00:07:48.169 } 00:07:48.169 }' 00:07:48.169 10:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:48.169 10:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:48.169 pt2' 00:07:48.169 10:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.169 10:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:48.169 10:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:48.169 10:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:48.170 10:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.170 10:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.170 10:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.170 10:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.170 10:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:48.170 10:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:48.170 10:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:48.170 10:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 
00:07:48.170 10:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.170 10:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.170 10:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.170 10:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.428 10:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:48.428 10:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:48.428 10:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:48.428 10:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:48.428 10:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.428 10:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.428 [2024-11-15 10:36:09.337600] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:48.428 10:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.428 10:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 69c18a1d-7279-44ca-bb92-2002dfd199ef '!=' 69c18a1d-7279-44ca-bb92-2002dfd199ef ']' 00:07:48.428 10:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:48.428 10:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:48.428 10:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:48.428 10:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62157 00:07:48.428 10:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62157 ']' 
00:07:48.428 10:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62157 00:07:48.428 10:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:48.428 10:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:48.428 10:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62157 00:07:48.428 10:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:48.428 10:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:48.428 10:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62157' 00:07:48.428 killing process with pid 62157 00:07:48.428 10:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62157 00:07:48.428 [2024-11-15 10:36:09.414867] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:48.428 10:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62157 00:07:48.428 [2024-11-15 10:36:09.415142] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:48.428 [2024-11-15 10:36:09.415343] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:48.428 [2024-11-15 10:36:09.415503] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:48.687 [2024-11-15 10:36:09.601298] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:49.624 10:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:49.624 ************************************ 00:07:49.624 END TEST raid_superblock_test 00:07:49.624 ************************************ 00:07:49.624 00:07:49.624 real 0m4.850s 00:07:49.624 user 0m7.164s 00:07:49.624 sys 
0m0.716s 00:07:49.624 10:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.624 10:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.624 10:36:10 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:49.624 10:36:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:49.624 10:36:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.624 10:36:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:49.624 ************************************ 00:07:49.624 START TEST raid_read_error_test 00:07:49.624 ************************************ 00:07:49.624 10:36:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:07:49.624 10:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:49.624 10:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:49.624 10:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:49.624 10:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:49.624 10:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:49.624 10:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:49.624 10:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:49.624 10:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:49.625 10:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:49.625 10:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:49.625 10:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:49.625 
10:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:49.625 10:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:49.625 10:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:49.625 10:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:49.625 10:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:49.625 10:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:49.625 10:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:49.625 10:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:49.625 10:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:49.625 10:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:49.625 10:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:49.625 10:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.jmYlyMQlaF 00:07:49.625 10:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62374 00:07:49.625 10:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:49.625 10:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62374 00:07:49.625 10:36:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62374 ']' 00:07:49.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:49.625 10:36:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.625 10:36:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:49.625 10:36:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.625 10:36:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:49.625 10:36:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.882 [2024-11-15 10:36:10.799524] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:07:49.882 [2024-11-15 10:36:10.799926] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62374 ] 00:07:49.882 [2024-11-15 10:36:10.991206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.140 [2024-11-15 10:36:11.152059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.398 [2024-11-15 10:36:11.373720] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:50.398 [2024-11-15 10:36:11.373801] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:50.965 10:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.965 10:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:50.965 10:36:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:50.965 10:36:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:50.965 10:36:11 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.965 10:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.965 BaseBdev1_malloc 00:07:50.965 10:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.965 10:36:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:50.965 10:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.965 10:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.965 true 00:07:50.965 10:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.965 10:36:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:50.965 10:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.965 10:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.965 [2024-11-15 10:36:11.886515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:50.966 [2024-11-15 10:36:11.886583] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:50.966 [2024-11-15 10:36:11.886613] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:50.966 [2024-11-15 10:36:11.886641] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:50.966 [2024-11-15 10:36:11.889522] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:50.966 [2024-11-15 10:36:11.889570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:50.966 BaseBdev1 00:07:50.966 10:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.966 
10:36:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:50.966 10:36:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:50.966 10:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.966 10:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.966 BaseBdev2_malloc 00:07:50.966 10:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.966 10:36:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:50.966 10:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.966 10:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.966 true 00:07:50.966 10:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.966 10:36:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:50.966 10:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.966 10:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.966 [2024-11-15 10:36:11.950464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:50.966 [2024-11-15 10:36:11.950545] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:50.966 [2024-11-15 10:36:11.950569] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:50.966 [2024-11-15 10:36:11.950586] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:50.966 [2024-11-15 10:36:11.953361] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:07:50.966 [2024-11-15 10:36:11.953540] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:50.966 BaseBdev2 00:07:50.966 10:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.966 10:36:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:50.966 10:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.966 10:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.966 [2024-11-15 10:36:11.958554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:50.966 [2024-11-15 10:36:11.961057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:50.966 [2024-11-15 10:36:11.961447] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:50.966 [2024-11-15 10:36:11.961477] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:50.966 [2024-11-15 10:36:11.961794] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:50.966 [2024-11-15 10:36:11.962025] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:50.966 [2024-11-15 10:36:11.962045] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:50.966 [2024-11-15 10:36:11.962232] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:50.966 10:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.966 10:36:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:50.966 10:36:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:07:50.966 10:36:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:50.966 10:36:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:50.966 10:36:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.966 10:36:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.966 10:36:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.966 10:36:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.966 10:36:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.966 10:36:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.966 10:36:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.966 10:36:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:50.966 10:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.966 10:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.966 10:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.966 10:36:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.966 "name": "raid_bdev1", 00:07:50.966 "uuid": "66128088-a459-4e08-8242-df8869f77e67", 00:07:50.966 "strip_size_kb": 64, 00:07:50.966 "state": "online", 00:07:50.966 "raid_level": "concat", 00:07:50.966 "superblock": true, 00:07:50.966 "num_base_bdevs": 2, 00:07:50.966 "num_base_bdevs_discovered": 2, 00:07:50.966 "num_base_bdevs_operational": 2, 00:07:50.966 "base_bdevs_list": [ 00:07:50.966 { 00:07:50.966 "name": "BaseBdev1", 00:07:50.966 "uuid": 
"71851f36-6365-5aa0-8ba0-ee88349aa683", 00:07:50.966 "is_configured": true, 00:07:50.966 "data_offset": 2048, 00:07:50.966 "data_size": 63488 00:07:50.966 }, 00:07:50.966 { 00:07:50.966 "name": "BaseBdev2", 00:07:50.966 "uuid": "c2e2dff5-a4fc-57b1-860e-d238555ce25e", 00:07:50.966 "is_configured": true, 00:07:50.966 "data_offset": 2048, 00:07:50.966 "data_size": 63488 00:07:50.966 } 00:07:50.966 ] 00:07:50.966 }' 00:07:50.966 10:36:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.966 10:36:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.577 10:36:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:51.577 10:36:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:51.577 [2024-11-15 10:36:12.552129] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:52.513 10:36:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:52.513 10:36:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.513 10:36:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.513 10:36:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.513 10:36:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:52.513 10:36:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:52.513 10:36:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:52.513 10:36:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:52.513 10:36:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:07:52.513 10:36:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:52.513 10:36:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:52.513 10:36:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:52.513 10:36:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:52.513 10:36:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.513 10:36:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.513 10:36:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.513 10:36:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.513 10:36:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:52.513 10:36:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.513 10:36:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.513 10:36:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.513 10:36:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.513 10:36:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.513 "name": "raid_bdev1", 00:07:52.513 "uuid": "66128088-a459-4e08-8242-df8869f77e67", 00:07:52.513 "strip_size_kb": 64, 00:07:52.513 "state": "online", 00:07:52.513 "raid_level": "concat", 00:07:52.513 "superblock": true, 00:07:52.513 "num_base_bdevs": 2, 00:07:52.513 "num_base_bdevs_discovered": 2, 00:07:52.513 "num_base_bdevs_operational": 2, 00:07:52.513 "base_bdevs_list": [ 00:07:52.513 { 00:07:52.513 "name": "BaseBdev1", 00:07:52.513 "uuid": 
"71851f36-6365-5aa0-8ba0-ee88349aa683", 00:07:52.513 "is_configured": true, 00:07:52.513 "data_offset": 2048, 00:07:52.513 "data_size": 63488 00:07:52.513 }, 00:07:52.513 { 00:07:52.513 "name": "BaseBdev2", 00:07:52.513 "uuid": "c2e2dff5-a4fc-57b1-860e-d238555ce25e", 00:07:52.513 "is_configured": true, 00:07:52.513 "data_offset": 2048, 00:07:52.513 "data_size": 63488 00:07:52.513 } 00:07:52.513 ] 00:07:52.513 }' 00:07:52.513 10:36:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.513 10:36:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.080 10:36:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:53.080 10:36:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.080 10:36:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.080 [2024-11-15 10:36:13.994897] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:53.080 [2024-11-15 10:36:13.994938] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:53.080 [2024-11-15 10:36:13.998271] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:53.080 [2024-11-15 10:36:13.998478] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:53.080 [2024-11-15 10:36:13.998552] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:53.080 [2024-11-15 10:36:13.998576] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:53.080 { 00:07:53.080 "results": [ 00:07:53.080 { 00:07:53.080 "job": "raid_bdev1", 00:07:53.080 "core_mask": "0x1", 00:07:53.080 "workload": "randrw", 00:07:53.080 "percentage": 50, 00:07:53.080 "status": "finished", 00:07:53.080 "queue_depth": 1, 00:07:53.080 "io_size": 
131072, 00:07:53.080 "runtime": 1.44009, 00:07:53.080 "iops": 10985.424522078481, 00:07:53.080 "mibps": 1373.1780652598102, 00:07:53.080 "io_failed": 1, 00:07:53.080 "io_timeout": 0, 00:07:53.080 "avg_latency_us": 127.00922111577823, 00:07:53.081 "min_latency_us": 42.82181818181818, 00:07:53.081 "max_latency_us": 1869.2654545454545 00:07:53.081 } 00:07:53.081 ], 00:07:53.081 "core_count": 1 00:07:53.081 } 00:07:53.081 10:36:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.081 10:36:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62374 00:07:53.081 10:36:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62374 ']' 00:07:53.081 10:36:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62374 00:07:53.081 10:36:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:53.081 10:36:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:53.081 10:36:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62374 00:07:53.081 killing process with pid 62374 00:07:53.081 10:36:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:53.081 10:36:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:53.081 10:36:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62374' 00:07:53.081 10:36:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62374 00:07:53.081 [2024-11-15 10:36:14.031010] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:53.081 10:36:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62374 00:07:53.081 [2024-11-15 10:36:14.164601] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:54.454 10:36:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.jmYlyMQlaF 00:07:54.454 10:36:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:54.454 10:36:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:54.454 ************************************ 00:07:54.454 END TEST raid_read_error_test 00:07:54.454 ************************************ 00:07:54.454 10:36:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:07:54.454 10:36:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:54.454 10:36:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:54.454 10:36:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:54.454 10:36:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:07:54.454 00:07:54.454 real 0m4.613s 00:07:54.454 user 0m5.742s 00:07:54.454 sys 0m0.590s 00:07:54.454 10:36:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.454 10:36:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.454 10:36:15 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:54.454 10:36:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:54.454 10:36:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.454 10:36:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:54.454 ************************************ 00:07:54.454 START TEST raid_write_error_test 00:07:54.454 ************************************ 00:07:54.454 10:36:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:07:54.454 10:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 
00:07:54.454 10:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:54.454 10:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:54.454 10:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:54.454 10:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:54.454 10:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:54.454 10:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:54.454 10:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:54.454 10:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:54.454 10:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:54.454 10:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:54.454 10:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:54.454 10:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:54.454 10:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:54.454 10:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:54.454 10:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:54.454 10:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:54.454 10:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:54.454 10:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:54.454 10:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:54.454 
10:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:54.454 10:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:54.454 10:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.8GhcFMDEPQ 00:07:54.454 10:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62520 00:07:54.454 10:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:54.454 10:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62520 00:07:54.454 10:36:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62520 ']' 00:07:54.454 10:36:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.454 10:36:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:54.454 10:36:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.454 10:36:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:54.454 10:36:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.454 [2024-11-15 10:36:15.486918] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:07:54.454 [2024-11-15 10:36:15.487160] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62520 ] 00:07:54.713 [2024-11-15 10:36:15.670168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.713 [2024-11-15 10:36:15.835194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.971 [2024-11-15 10:36:16.045230] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:54.971 [2024-11-15 10:36:16.046309] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:55.536 10:36:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:55.536 10:36:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:55.536 10:36:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:55.536 10:36:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:55.536 10:36:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.536 10:36:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.536 BaseBdev1_malloc 00:07:55.536 10:36:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.536 10:36:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:55.536 10:36:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.536 10:36:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.536 true 00:07:55.536 10:36:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:55.536 10:36:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:55.536 10:36:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.537 10:36:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.537 [2024-11-15 10:36:16.543722] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:55.537 [2024-11-15 10:36:16.543929] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:55.537 [2024-11-15 10:36:16.543971] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:55.537 [2024-11-15 10:36:16.543991] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:55.537 [2024-11-15 10:36:16.546844] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:55.537 [2024-11-15 10:36:16.546899] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:55.537 BaseBdev1 00:07:55.537 10:36:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.537 10:36:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:55.537 10:36:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:55.537 10:36:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.537 10:36:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.537 BaseBdev2_malloc 00:07:55.537 10:36:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.537 10:36:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:55.537 10:36:16 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.537 10:36:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.537 true 00:07:55.537 10:36:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.537 10:36:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:55.537 10:36:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.537 10:36:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.537 [2024-11-15 10:36:16.599873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:55.537 [2024-11-15 10:36:16.599945] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:55.537 [2024-11-15 10:36:16.599972] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:55.537 [2024-11-15 10:36:16.599989] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:55.537 [2024-11-15 10:36:16.602757] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:55.537 [2024-11-15 10:36:16.602815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:55.537 BaseBdev2 00:07:55.537 10:36:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.537 10:36:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:55.537 10:36:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.537 10:36:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.537 [2024-11-15 10:36:16.607958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:55.537 [2024-11-15 10:36:16.610357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:55.537 [2024-11-15 10:36:16.610779] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:55.537 [2024-11-15 10:36:16.610810] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:55.537 [2024-11-15 10:36:16.611105] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:55.537 [2024-11-15 10:36:16.611340] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:55.537 [2024-11-15 10:36:16.611360] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:55.537 [2024-11-15 10:36:16.611569] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:55.537 10:36:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.537 10:36:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:55.537 10:36:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:55.537 10:36:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:55.537 10:36:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:55.537 10:36:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:55.537 10:36:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:55.537 10:36:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.537 10:36:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.537 10:36:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.537 10:36:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.537 10:36:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.537 10:36:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:55.537 10:36:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.537 10:36:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.537 10:36:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.537 10:36:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.537 "name": "raid_bdev1", 00:07:55.537 "uuid": "e988fbd1-3d16-48cf-abdf-a22a4aa64231", 00:07:55.537 "strip_size_kb": 64, 00:07:55.537 "state": "online", 00:07:55.537 "raid_level": "concat", 00:07:55.537 "superblock": true, 00:07:55.537 "num_base_bdevs": 2, 00:07:55.537 "num_base_bdevs_discovered": 2, 00:07:55.537 "num_base_bdevs_operational": 2, 00:07:55.537 "base_bdevs_list": [ 00:07:55.537 { 00:07:55.537 "name": "BaseBdev1", 00:07:55.537 "uuid": "88d00a36-9e46-57ca-901b-e5b6dba1f208", 00:07:55.537 "is_configured": true, 00:07:55.537 "data_offset": 2048, 00:07:55.537 "data_size": 63488 00:07:55.537 }, 00:07:55.537 { 00:07:55.537 "name": "BaseBdev2", 00:07:55.537 "uuid": "1f003b27-fa7b-53dc-aa39-c460c0ff106e", 00:07:55.537 "is_configured": true, 00:07:55.537 "data_offset": 2048, 00:07:55.537 "data_size": 63488 00:07:55.537 } 00:07:55.537 ] 00:07:55.537 }' 00:07:55.537 10:36:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.537 10:36:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.103 10:36:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:07:56.103 10:36:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:56.361 [2024-11-15 10:36:17.285600] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:57.295 10:36:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:57.295 10:36:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.295 10:36:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.295 10:36:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.295 10:36:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:57.295 10:36:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:57.295 10:36:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:57.295 10:36:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:57.295 10:36:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:57.295 10:36:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:57.295 10:36:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:57.295 10:36:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.295 10:36:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:57.295 10:36:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.295 10:36:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:07:57.295 10:36:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.295 10:36:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.295 10:36:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.295 10:36:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:57.295 10:36:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.295 10:36:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.295 10:36:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.295 10:36:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.295 "name": "raid_bdev1", 00:07:57.295 "uuid": "e988fbd1-3d16-48cf-abdf-a22a4aa64231", 00:07:57.295 "strip_size_kb": 64, 00:07:57.295 "state": "online", 00:07:57.295 "raid_level": "concat", 00:07:57.295 "superblock": true, 00:07:57.295 "num_base_bdevs": 2, 00:07:57.295 "num_base_bdevs_discovered": 2, 00:07:57.295 "num_base_bdevs_operational": 2, 00:07:57.295 "base_bdevs_list": [ 00:07:57.295 { 00:07:57.295 "name": "BaseBdev1", 00:07:57.295 "uuid": "88d00a36-9e46-57ca-901b-e5b6dba1f208", 00:07:57.295 "is_configured": true, 00:07:57.295 "data_offset": 2048, 00:07:57.295 "data_size": 63488 00:07:57.295 }, 00:07:57.295 { 00:07:57.295 "name": "BaseBdev2", 00:07:57.295 "uuid": "1f003b27-fa7b-53dc-aa39-c460c0ff106e", 00:07:57.295 "is_configured": true, 00:07:57.295 "data_offset": 2048, 00:07:57.295 "data_size": 63488 00:07:57.295 } 00:07:57.295 ] 00:07:57.295 }' 00:07:57.295 10:36:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.295 10:36:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.552 10:36:18 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:57.552 10:36:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.552 10:36:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.552 [2024-11-15 10:36:18.673161] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:57.552 [2024-11-15 10:36:18.673347] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:57.552 [2024-11-15 10:36:18.676859] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:57.552 [2024-11-15 10:36:18.677039] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:57.552 [2024-11-15 10:36:18.677211] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:57.552 [2024-11-15 10:36:18.677374] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:57.552 10:36:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.552 { 00:07:57.552 "results": [ 00:07:57.552 { 00:07:57.552 "job": "raid_bdev1", 00:07:57.552 "core_mask": "0x1", 00:07:57.552 "workload": "randrw", 00:07:57.552 "percentage": 50, 00:07:57.552 "status": "finished", 00:07:57.552 "queue_depth": 1, 00:07:57.552 "io_size": 131072, 00:07:57.552 "runtime": 1.38494, 00:07:57.552 "iops": 10585.296113911072, 00:07:57.552 "mibps": 1323.162014238884, 00:07:57.552 "io_failed": 1, 00:07:57.552 "io_timeout": 0, 00:07:57.552 "avg_latency_us": 132.04663851529412, 00:07:57.552 "min_latency_us": 43.52, 00:07:57.552 "max_latency_us": 1869.2654545454545 00:07:57.552 } 00:07:57.552 ], 00:07:57.552 "core_count": 1 00:07:57.552 } 00:07:57.552 10:36:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62520 00:07:57.552 10:36:18 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 62520 ']' 00:07:57.552 10:36:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62520 00:07:57.552 10:36:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:57.552 10:36:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:57.552 10:36:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62520 00:07:57.814 10:36:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:57.814 10:36:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:57.814 10:36:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62520' 00:07:57.814 killing process with pid 62520 00:07:57.814 10:36:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62520 00:07:57.814 [2024-11-15 10:36:18.714939] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:57.814 10:36:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62520 00:07:57.814 [2024-11-15 10:36:18.835876] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:58.771 10:36:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:58.771 10:36:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.8GhcFMDEPQ 00:07:58.771 10:36:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:58.771 10:36:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:07:58.771 10:36:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:58.771 ************************************ 00:07:58.771 END TEST raid_write_error_test 00:07:58.771 ************************************ 00:07:58.771 
10:36:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:58.771 10:36:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:58.771 10:36:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:07:58.771 00:07:58.771 real 0m4.582s 00:07:58.771 user 0m5.809s 00:07:58.771 sys 0m0.559s 00:07:58.771 10:36:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.771 10:36:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.029 10:36:19 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:59.029 10:36:19 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:07:59.029 10:36:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:59.029 10:36:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.029 10:36:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:59.029 ************************************ 00:07:59.029 START TEST raid_state_function_test 00:07:59.029 ************************************ 00:07:59.029 10:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:07:59.029 10:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:59.029 10:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:59.029 10:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:59.029 10:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:59.029 10:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:59.029 10:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:07:59.029 10:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:59.029 10:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:59.029 10:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:59.029 10:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:59.029 10:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:59.029 10:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:59.029 10:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:59.029 10:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:59.029 10:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:59.029 10:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:59.029 10:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:59.029 10:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:59.029 10:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:59.029 10:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:59.029 10:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:59.029 10:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:59.029 Process raid pid: 62658 00:07:59.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:59.029 10:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62658 00:07:59.030 10:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62658' 00:07:59.030 10:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62658 00:07:59.030 10:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:59.030 10:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62658 ']' 00:07:59.030 10:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.030 10:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:59.030 10:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.030 10:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:59.030 10:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.030 [2024-11-15 10:36:20.069213] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:07:59.030 [2024-11-15 10:36:20.069366] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:59.287 [2024-11-15 10:36:20.247375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.287 [2024-11-15 10:36:20.381891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.548 [2024-11-15 10:36:20.590960] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:59.548 [2024-11-15 10:36:20.591228] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:00.115 10:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:00.115 10:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:00.115 10:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:00.115 10:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.115 10:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.115 [2024-11-15 10:36:21.091635] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:00.115 [2024-11-15 10:36:21.091703] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:00.115 [2024-11-15 10:36:21.091721] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:00.115 [2024-11-15 10:36:21.091737] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:00.115 10:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.115 10:36:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:00.115 10:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:00.115 10:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:00.115 10:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:00.115 10:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:00.115 10:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:00.115 10:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.115 10:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.115 10:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.115 10:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.115 10:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.115 10:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.115 10:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.115 10:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.115 10:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.115 10:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.115 "name": "Existed_Raid", 00:08:00.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.115 "strip_size_kb": 0, 00:08:00.115 "state": "configuring", 00:08:00.115 
"raid_level": "raid1", 00:08:00.115 "superblock": false, 00:08:00.115 "num_base_bdevs": 2, 00:08:00.115 "num_base_bdevs_discovered": 0, 00:08:00.115 "num_base_bdevs_operational": 2, 00:08:00.115 "base_bdevs_list": [ 00:08:00.115 { 00:08:00.115 "name": "BaseBdev1", 00:08:00.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.115 "is_configured": false, 00:08:00.115 "data_offset": 0, 00:08:00.115 "data_size": 0 00:08:00.115 }, 00:08:00.115 { 00:08:00.115 "name": "BaseBdev2", 00:08:00.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.115 "is_configured": false, 00:08:00.115 "data_offset": 0, 00:08:00.115 "data_size": 0 00:08:00.115 } 00:08:00.115 ] 00:08:00.115 }' 00:08:00.115 10:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.115 10:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.681 10:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:00.681 10:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.681 10:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.681 [2024-11-15 10:36:21.603740] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:00.681 [2024-11-15 10:36:21.603786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:00.681 10:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.681 10:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:00.681 10:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.681 10:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:00.681 [2024-11-15 10:36:21.611701] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:00.681 [2024-11-15 10:36:21.611758] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:00.681 [2024-11-15 10:36:21.611777] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:00.681 [2024-11-15 10:36:21.611807] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:00.681 10:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.681 10:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:00.681 10:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.681 10:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.681 [2024-11-15 10:36:21.656813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:00.681 BaseBdev1 00:08:00.681 10:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.681 10:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:00.681 10:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:00.681 10:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:00.681 10:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:00.681 10:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:00.681 10:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:00.681 10:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:08:00.681 10:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.681 10:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.681 10:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.681 10:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:00.681 10:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.681 10:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.681 [ 00:08:00.681 { 00:08:00.681 "name": "BaseBdev1", 00:08:00.681 "aliases": [ 00:08:00.681 "bbb84cdb-66cf-4e1f-a7d4-59ddebab1f23" 00:08:00.681 ], 00:08:00.681 "product_name": "Malloc disk", 00:08:00.681 "block_size": 512, 00:08:00.681 "num_blocks": 65536, 00:08:00.681 "uuid": "bbb84cdb-66cf-4e1f-a7d4-59ddebab1f23", 00:08:00.681 "assigned_rate_limits": { 00:08:00.681 "rw_ios_per_sec": 0, 00:08:00.681 "rw_mbytes_per_sec": 0, 00:08:00.681 "r_mbytes_per_sec": 0, 00:08:00.681 "w_mbytes_per_sec": 0 00:08:00.681 }, 00:08:00.681 "claimed": true, 00:08:00.681 "claim_type": "exclusive_write", 00:08:00.681 "zoned": false, 00:08:00.681 "supported_io_types": { 00:08:00.681 "read": true, 00:08:00.681 "write": true, 00:08:00.681 "unmap": true, 00:08:00.681 "flush": true, 00:08:00.681 "reset": true, 00:08:00.681 "nvme_admin": false, 00:08:00.681 "nvme_io": false, 00:08:00.681 "nvme_io_md": false, 00:08:00.681 "write_zeroes": true, 00:08:00.681 "zcopy": true, 00:08:00.681 "get_zone_info": false, 00:08:00.681 "zone_management": false, 00:08:00.681 "zone_append": false, 00:08:00.681 "compare": false, 00:08:00.681 "compare_and_write": false, 00:08:00.681 "abort": true, 00:08:00.681 "seek_hole": false, 00:08:00.681 "seek_data": false, 00:08:00.681 "copy": true, 00:08:00.681 "nvme_iov_md": 
false 00:08:00.681 }, 00:08:00.681 "memory_domains": [ 00:08:00.681 { 00:08:00.681 "dma_device_id": "system", 00:08:00.681 "dma_device_type": 1 00:08:00.681 }, 00:08:00.681 { 00:08:00.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.681 "dma_device_type": 2 00:08:00.681 } 00:08:00.681 ], 00:08:00.681 "driver_specific": {} 00:08:00.681 } 00:08:00.681 ] 00:08:00.681 10:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.681 10:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:00.681 10:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:00.681 10:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:00.681 10:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:00.681 10:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:00.681 10:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:00.681 10:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:00.681 10:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.681 10:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.681 10:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.681 10:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.681 10:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.681 10:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.681 
10:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.681 10:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.681 10:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.682 10:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.682 "name": "Existed_Raid", 00:08:00.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.682 "strip_size_kb": 0, 00:08:00.682 "state": "configuring", 00:08:00.682 "raid_level": "raid1", 00:08:00.682 "superblock": false, 00:08:00.682 "num_base_bdevs": 2, 00:08:00.682 "num_base_bdevs_discovered": 1, 00:08:00.682 "num_base_bdevs_operational": 2, 00:08:00.682 "base_bdevs_list": [ 00:08:00.682 { 00:08:00.682 "name": "BaseBdev1", 00:08:00.682 "uuid": "bbb84cdb-66cf-4e1f-a7d4-59ddebab1f23", 00:08:00.682 "is_configured": true, 00:08:00.682 "data_offset": 0, 00:08:00.682 "data_size": 65536 00:08:00.682 }, 00:08:00.682 { 00:08:00.682 "name": "BaseBdev2", 00:08:00.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.682 "is_configured": false, 00:08:00.682 "data_offset": 0, 00:08:00.682 "data_size": 0 00:08:00.682 } 00:08:00.682 ] 00:08:00.682 }' 00:08:00.682 10:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.682 10:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.304 10:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:01.304 10:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.304 10:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.304 [2024-11-15 10:36:22.189000] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:01.304 [2024-11-15 10:36:22.189064] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:01.304 10:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.304 10:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:01.304 10:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.304 10:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.304 [2024-11-15 10:36:22.201053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:01.304 [2024-11-15 10:36:22.203653] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:01.304 [2024-11-15 10:36:22.203838] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:01.304 10:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.304 10:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:01.304 10:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:01.304 10:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:01.304 10:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.304 10:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:01.304 10:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:01.304 10:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:01.304 10:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:08:01.304 10:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.304 10:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.304 10:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.304 10:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.304 10:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.304 10:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.304 10:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.304 10:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.304 10:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.304 10:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.304 "name": "Existed_Raid", 00:08:01.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.304 "strip_size_kb": 0, 00:08:01.305 "state": "configuring", 00:08:01.305 "raid_level": "raid1", 00:08:01.305 "superblock": false, 00:08:01.305 "num_base_bdevs": 2, 00:08:01.305 "num_base_bdevs_discovered": 1, 00:08:01.305 "num_base_bdevs_operational": 2, 00:08:01.305 "base_bdevs_list": [ 00:08:01.305 { 00:08:01.305 "name": "BaseBdev1", 00:08:01.305 "uuid": "bbb84cdb-66cf-4e1f-a7d4-59ddebab1f23", 00:08:01.305 "is_configured": true, 00:08:01.305 "data_offset": 0, 00:08:01.305 "data_size": 65536 00:08:01.305 }, 00:08:01.305 { 00:08:01.305 "name": "BaseBdev2", 00:08:01.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.305 "is_configured": false, 00:08:01.305 "data_offset": 0, 00:08:01.305 "data_size": 0 00:08:01.305 } 00:08:01.305 ] 
00:08:01.305 }' 00:08:01.305 10:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.305 10:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.563 10:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:01.564 10:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.564 10:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.822 [2024-11-15 10:36:22.732363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:01.822 [2024-11-15 10:36:22.732444] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:01.822 [2024-11-15 10:36:22.732457] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:01.822 [2024-11-15 10:36:22.732856] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:01.822 [2024-11-15 10:36:22.733111] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:01.822 [2024-11-15 10:36:22.733144] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:01.822 [2024-11-15 10:36:22.733509] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:01.822 BaseBdev2 00:08:01.822 10:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.822 10:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:01.822 10:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:01.822 10:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:01.822 10:36:22 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@905 -- # local i 00:08:01.822 10:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:01.822 10:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:01.822 10:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:01.822 10:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.822 10:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.822 10:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.822 10:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:01.822 10:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.822 10:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.822 [ 00:08:01.822 { 00:08:01.822 "name": "BaseBdev2", 00:08:01.822 "aliases": [ 00:08:01.822 "30ebe219-756d-4a13-a4c1-4d1fd26a2d72" 00:08:01.822 ], 00:08:01.822 "product_name": "Malloc disk", 00:08:01.822 "block_size": 512, 00:08:01.822 "num_blocks": 65536, 00:08:01.822 "uuid": "30ebe219-756d-4a13-a4c1-4d1fd26a2d72", 00:08:01.822 "assigned_rate_limits": { 00:08:01.822 "rw_ios_per_sec": 0, 00:08:01.822 "rw_mbytes_per_sec": 0, 00:08:01.822 "r_mbytes_per_sec": 0, 00:08:01.822 "w_mbytes_per_sec": 0 00:08:01.822 }, 00:08:01.822 "claimed": true, 00:08:01.822 "claim_type": "exclusive_write", 00:08:01.822 "zoned": false, 00:08:01.822 "supported_io_types": { 00:08:01.822 "read": true, 00:08:01.822 "write": true, 00:08:01.822 "unmap": true, 00:08:01.822 "flush": true, 00:08:01.822 "reset": true, 00:08:01.822 "nvme_admin": false, 00:08:01.822 "nvme_io": false, 00:08:01.822 "nvme_io_md": false, 00:08:01.822 "write_zeroes": 
true, 00:08:01.822 "zcopy": true, 00:08:01.822 "get_zone_info": false, 00:08:01.822 "zone_management": false, 00:08:01.823 "zone_append": false, 00:08:01.823 "compare": false, 00:08:01.823 "compare_and_write": false, 00:08:01.823 "abort": true, 00:08:01.823 "seek_hole": false, 00:08:01.823 "seek_data": false, 00:08:01.823 "copy": true, 00:08:01.823 "nvme_iov_md": false 00:08:01.823 }, 00:08:01.823 "memory_domains": [ 00:08:01.823 { 00:08:01.823 "dma_device_id": "system", 00:08:01.823 "dma_device_type": 1 00:08:01.823 }, 00:08:01.823 { 00:08:01.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.823 "dma_device_type": 2 00:08:01.823 } 00:08:01.823 ], 00:08:01.823 "driver_specific": {} 00:08:01.823 } 00:08:01.823 ] 00:08:01.823 10:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.823 10:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:01.823 10:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:01.823 10:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:01.823 10:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:01.823 10:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.823 10:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:01.823 10:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:01.823 10:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:01.823 10:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:01.823 10:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.823 10:36:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.823 10:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.823 10:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.823 10:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.823 10:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.823 10:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.823 10:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.823 10:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.823 10:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.823 "name": "Existed_Raid", 00:08:01.823 "uuid": "9049bb53-ed60-4b3e-bae6-64c73e834cf2", 00:08:01.823 "strip_size_kb": 0, 00:08:01.823 "state": "online", 00:08:01.823 "raid_level": "raid1", 00:08:01.823 "superblock": false, 00:08:01.823 "num_base_bdevs": 2, 00:08:01.823 "num_base_bdevs_discovered": 2, 00:08:01.823 "num_base_bdevs_operational": 2, 00:08:01.823 "base_bdevs_list": [ 00:08:01.823 { 00:08:01.823 "name": "BaseBdev1", 00:08:01.823 "uuid": "bbb84cdb-66cf-4e1f-a7d4-59ddebab1f23", 00:08:01.823 "is_configured": true, 00:08:01.823 "data_offset": 0, 00:08:01.823 "data_size": 65536 00:08:01.823 }, 00:08:01.823 { 00:08:01.823 "name": "BaseBdev2", 00:08:01.823 "uuid": "30ebe219-756d-4a13-a4c1-4d1fd26a2d72", 00:08:01.823 "is_configured": true, 00:08:01.823 "data_offset": 0, 00:08:01.823 "data_size": 65536 00:08:01.823 } 00:08:01.823 ] 00:08:01.823 }' 00:08:01.823 10:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.823 10:36:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.389 10:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:02.389 10:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:02.389 10:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:02.389 10:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:02.389 10:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:02.389 10:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:02.389 10:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:02.389 10:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.389 10:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:02.389 10:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.389 [2024-11-15 10:36:23.268944] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:02.389 10:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.389 10:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:02.389 "name": "Existed_Raid", 00:08:02.389 "aliases": [ 00:08:02.389 "9049bb53-ed60-4b3e-bae6-64c73e834cf2" 00:08:02.389 ], 00:08:02.389 "product_name": "Raid Volume", 00:08:02.389 "block_size": 512, 00:08:02.389 "num_blocks": 65536, 00:08:02.389 "uuid": "9049bb53-ed60-4b3e-bae6-64c73e834cf2", 00:08:02.389 "assigned_rate_limits": { 00:08:02.389 "rw_ios_per_sec": 0, 00:08:02.389 "rw_mbytes_per_sec": 0, 00:08:02.389 "r_mbytes_per_sec": 0, 00:08:02.389 
"w_mbytes_per_sec": 0 00:08:02.389 }, 00:08:02.389 "claimed": false, 00:08:02.389 "zoned": false, 00:08:02.389 "supported_io_types": { 00:08:02.389 "read": true, 00:08:02.389 "write": true, 00:08:02.389 "unmap": false, 00:08:02.389 "flush": false, 00:08:02.389 "reset": true, 00:08:02.389 "nvme_admin": false, 00:08:02.389 "nvme_io": false, 00:08:02.389 "nvme_io_md": false, 00:08:02.389 "write_zeroes": true, 00:08:02.389 "zcopy": false, 00:08:02.389 "get_zone_info": false, 00:08:02.389 "zone_management": false, 00:08:02.389 "zone_append": false, 00:08:02.389 "compare": false, 00:08:02.389 "compare_and_write": false, 00:08:02.389 "abort": false, 00:08:02.389 "seek_hole": false, 00:08:02.389 "seek_data": false, 00:08:02.389 "copy": false, 00:08:02.389 "nvme_iov_md": false 00:08:02.389 }, 00:08:02.389 "memory_domains": [ 00:08:02.389 { 00:08:02.389 "dma_device_id": "system", 00:08:02.389 "dma_device_type": 1 00:08:02.389 }, 00:08:02.389 { 00:08:02.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.389 "dma_device_type": 2 00:08:02.389 }, 00:08:02.389 { 00:08:02.389 "dma_device_id": "system", 00:08:02.389 "dma_device_type": 1 00:08:02.389 }, 00:08:02.389 { 00:08:02.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.389 "dma_device_type": 2 00:08:02.389 } 00:08:02.389 ], 00:08:02.389 "driver_specific": { 00:08:02.389 "raid": { 00:08:02.389 "uuid": "9049bb53-ed60-4b3e-bae6-64c73e834cf2", 00:08:02.389 "strip_size_kb": 0, 00:08:02.389 "state": "online", 00:08:02.389 "raid_level": "raid1", 00:08:02.389 "superblock": false, 00:08:02.389 "num_base_bdevs": 2, 00:08:02.389 "num_base_bdevs_discovered": 2, 00:08:02.389 "num_base_bdevs_operational": 2, 00:08:02.389 "base_bdevs_list": [ 00:08:02.389 { 00:08:02.389 "name": "BaseBdev1", 00:08:02.389 "uuid": "bbb84cdb-66cf-4e1f-a7d4-59ddebab1f23", 00:08:02.389 "is_configured": true, 00:08:02.389 "data_offset": 0, 00:08:02.389 "data_size": 65536 00:08:02.389 }, 00:08:02.389 { 00:08:02.389 "name": "BaseBdev2", 00:08:02.389 "uuid": 
"30ebe219-756d-4a13-a4c1-4d1fd26a2d72", 00:08:02.389 "is_configured": true, 00:08:02.389 "data_offset": 0, 00:08:02.389 "data_size": 65536 00:08:02.389 } 00:08:02.389 ] 00:08:02.389 } 00:08:02.389 } 00:08:02.389 }' 00:08:02.389 10:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:02.389 10:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:02.389 BaseBdev2' 00:08:02.389 10:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.389 10:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:02.389 10:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:02.389 10:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:02.389 10:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.389 10:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.389 10:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.389 10:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.389 10:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:02.389 10:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:02.389 10:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:02.389 10:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:02.389 10:36:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.389 10:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.389 10:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.389 10:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.389 10:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:02.389 10:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:02.389 10:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:02.389 10:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.389 10:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.389 [2024-11-15 10:36:23.524705] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:02.648 10:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.648 10:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:02.648 10:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:02.648 10:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:02.648 10:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:02.648 10:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:02.648 10:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:02.648 10:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:08:02.648 10:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:02.648 10:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:02.648 10:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:02.648 10:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:02.648 10:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.648 10:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.648 10:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.648 10:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.648 10:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.648 10:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.648 10:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.648 10:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.648 10:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.648 10:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.648 "name": "Existed_Raid", 00:08:02.648 "uuid": "9049bb53-ed60-4b3e-bae6-64c73e834cf2", 00:08:02.648 "strip_size_kb": 0, 00:08:02.648 "state": "online", 00:08:02.648 "raid_level": "raid1", 00:08:02.648 "superblock": false, 00:08:02.648 "num_base_bdevs": 2, 00:08:02.648 "num_base_bdevs_discovered": 1, 00:08:02.648 "num_base_bdevs_operational": 1, 00:08:02.648 "base_bdevs_list": [ 00:08:02.648 { 
00:08:02.648 "name": null, 00:08:02.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.648 "is_configured": false, 00:08:02.648 "data_offset": 0, 00:08:02.648 "data_size": 65536 00:08:02.648 }, 00:08:02.648 { 00:08:02.648 "name": "BaseBdev2", 00:08:02.648 "uuid": "30ebe219-756d-4a13-a4c1-4d1fd26a2d72", 00:08:02.648 "is_configured": true, 00:08:02.648 "data_offset": 0, 00:08:02.648 "data_size": 65536 00:08:02.648 } 00:08:02.648 ] 00:08:02.648 }' 00:08:02.648 10:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.648 10:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.264 10:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:03.264 10:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:03.264 10:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.264 10:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.264 10:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:03.264 10:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.264 10:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.264 10:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:03.264 10:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:03.264 10:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:03.264 10:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.264 10:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:03.264 [2024-11-15 10:36:24.175314] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:03.264 [2024-11-15 10:36:24.175604] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:03.264 [2024-11-15 10:36:24.263747] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:03.264 [2024-11-15 10:36:24.263826] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:03.264 [2024-11-15 10:36:24.263849] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:03.264 10:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.264 10:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:03.264 10:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:03.264 10:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.264 10:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.264 10:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:03.264 10:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.264 10:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.264 10:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:03.264 10:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:03.264 10:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:03.264 10:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62658 00:08:03.264 10:36:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62658 ']' 00:08:03.264 10:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62658 00:08:03.264 10:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:03.264 10:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:03.264 10:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62658 00:08:03.264 10:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:03.264 10:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:03.264 10:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62658' 00:08:03.264 killing process with pid 62658 00:08:03.264 10:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62658 00:08:03.264 [2024-11-15 10:36:24.352118] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:03.264 10:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62658 00:08:03.264 [2024-11-15 10:36:24.367093] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:04.641 10:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:04.641 00:08:04.641 real 0m5.426s 00:08:04.641 user 0m8.174s 00:08:04.641 sys 0m0.752s 00:08:04.641 ************************************ 00:08:04.641 END TEST raid_state_function_test 00:08:04.641 ************************************ 00:08:04.641 10:36:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.641 10:36:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.641 10:36:25 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:04.641 10:36:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:04.641 10:36:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.641 10:36:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:04.641 ************************************ 00:08:04.641 START TEST raid_state_function_test_sb 00:08:04.641 ************************************ 00:08:04.641 10:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:08:04.641 10:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:04.641 10:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:04.641 10:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:04.641 10:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:04.641 10:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:04.641 10:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:04.641 10:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:04.641 10:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:04.641 10:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:04.641 10:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:04.641 10:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:04.641 10:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:04.641 10:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:04.641 10:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:04.641 10:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:04.641 10:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:04.641 10:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:04.641 10:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:04.641 10:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:04.641 10:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:04.641 10:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:04.641 10:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:04.641 Process raid pid: 62917 00:08:04.641 10:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62917 00:08:04.641 10:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62917' 00:08:04.641 10:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62917 00:08:04.641 10:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62917 ']' 00:08:04.641 10:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:04.641 10:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.641 10:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:04.641 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.641 10:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.641 10:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:04.641 10:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.641 [2024-11-15 10:36:25.567524] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:08:04.641 [2024-11-15 10:36:25.567696] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:04.641 [2024-11-15 10:36:25.752159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.899 [2024-11-15 10:36:25.919923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.155 [2024-11-15 10:36:26.127752] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:05.155 [2024-11-15 10:36:26.127807] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:05.719 10:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:05.720 10:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:05.720 10:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:05.720 10:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.720 10:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.720 [2024-11-15 10:36:26.593068] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:05.720 [2024-11-15 10:36:26.593134] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:05.720 [2024-11-15 10:36:26.593153] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:05.720 [2024-11-15 10:36:26.593170] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:05.720 10:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.720 10:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:05.720 10:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.720 10:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:05.720 10:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:05.720 10:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:05.720 10:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:05.720 10:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.720 10:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.720 10:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.720 10:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.720 10:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.720 10:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:08:05.720 10:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.720 10:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.720 10:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.720 10:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.720 "name": "Existed_Raid", 00:08:05.720 "uuid": "ac46aa86-c55c-4e75-b0b4-6ba1986d58ed", 00:08:05.720 "strip_size_kb": 0, 00:08:05.720 "state": "configuring", 00:08:05.720 "raid_level": "raid1", 00:08:05.720 "superblock": true, 00:08:05.720 "num_base_bdevs": 2, 00:08:05.720 "num_base_bdevs_discovered": 0, 00:08:05.720 "num_base_bdevs_operational": 2, 00:08:05.720 "base_bdevs_list": [ 00:08:05.720 { 00:08:05.720 "name": "BaseBdev1", 00:08:05.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.720 "is_configured": false, 00:08:05.720 "data_offset": 0, 00:08:05.720 "data_size": 0 00:08:05.720 }, 00:08:05.720 { 00:08:05.720 "name": "BaseBdev2", 00:08:05.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.720 "is_configured": false, 00:08:05.720 "data_offset": 0, 00:08:05.720 "data_size": 0 00:08:05.720 } 00:08:05.720 ] 00:08:05.720 }' 00:08:05.720 10:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.720 10:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.977 10:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:05.977 10:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.977 10:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.977 [2024-11-15 10:36:27.097159] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:08:05.977 [2024-11-15 10:36:27.097211] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:05.977 10:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.977 10:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:05.977 10:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.977 10:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.978 [2024-11-15 10:36:27.105126] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:05.978 [2024-11-15 10:36:27.105179] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:05.978 [2024-11-15 10:36:27.105196] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:05.978 [2024-11-15 10:36:27.105215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:05.978 10:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.978 10:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:05.978 10:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.978 10:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.235 [2024-11-15 10:36:27.151000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:06.235 BaseBdev1 00:08:06.235 10:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.235 10:36:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:06.235 10:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:06.235 10:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:06.235 10:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:06.235 10:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:06.235 10:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:06.235 10:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:06.235 10:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.235 10:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.235 10:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.235 10:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:06.235 10:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.235 10:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.235 [ 00:08:06.235 { 00:08:06.235 "name": "BaseBdev1", 00:08:06.235 "aliases": [ 00:08:06.235 "e151308d-2c53-4c5e-a869-7f59edcc90dc" 00:08:06.235 ], 00:08:06.235 "product_name": "Malloc disk", 00:08:06.235 "block_size": 512, 00:08:06.235 "num_blocks": 65536, 00:08:06.235 "uuid": "e151308d-2c53-4c5e-a869-7f59edcc90dc", 00:08:06.235 "assigned_rate_limits": { 00:08:06.235 "rw_ios_per_sec": 0, 00:08:06.235 "rw_mbytes_per_sec": 0, 00:08:06.235 "r_mbytes_per_sec": 0, 00:08:06.235 "w_mbytes_per_sec": 0 00:08:06.235 }, 00:08:06.235 "claimed": true, 
00:08:06.235 "claim_type": "exclusive_write", 00:08:06.235 "zoned": false, 00:08:06.235 "supported_io_types": { 00:08:06.236 "read": true, 00:08:06.236 "write": true, 00:08:06.236 "unmap": true, 00:08:06.236 "flush": true, 00:08:06.236 "reset": true, 00:08:06.236 "nvme_admin": false, 00:08:06.236 "nvme_io": false, 00:08:06.236 "nvme_io_md": false, 00:08:06.236 "write_zeroes": true, 00:08:06.236 "zcopy": true, 00:08:06.236 "get_zone_info": false, 00:08:06.236 "zone_management": false, 00:08:06.236 "zone_append": false, 00:08:06.236 "compare": false, 00:08:06.236 "compare_and_write": false, 00:08:06.236 "abort": true, 00:08:06.236 "seek_hole": false, 00:08:06.236 "seek_data": false, 00:08:06.236 "copy": true, 00:08:06.236 "nvme_iov_md": false 00:08:06.236 }, 00:08:06.236 "memory_domains": [ 00:08:06.236 { 00:08:06.236 "dma_device_id": "system", 00:08:06.236 "dma_device_type": 1 00:08:06.236 }, 00:08:06.236 { 00:08:06.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.236 "dma_device_type": 2 00:08:06.236 } 00:08:06.236 ], 00:08:06.236 "driver_specific": {} 00:08:06.236 } 00:08:06.236 ] 00:08:06.236 10:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.236 10:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:06.236 10:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:06.236 10:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.236 10:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:06.236 10:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:06.236 10:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:06.236 10:36:27 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:06.236 10:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.236 10:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.236 10:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.236 10:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.236 10:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.236 10:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.236 10:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.236 10:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.236 10:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.236 10:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.236 "name": "Existed_Raid", 00:08:06.236 "uuid": "d908b300-74eb-4157-b84f-fd8e135a32ee", 00:08:06.236 "strip_size_kb": 0, 00:08:06.236 "state": "configuring", 00:08:06.236 "raid_level": "raid1", 00:08:06.236 "superblock": true, 00:08:06.236 "num_base_bdevs": 2, 00:08:06.236 "num_base_bdevs_discovered": 1, 00:08:06.236 "num_base_bdevs_operational": 2, 00:08:06.236 "base_bdevs_list": [ 00:08:06.236 { 00:08:06.236 "name": "BaseBdev1", 00:08:06.236 "uuid": "e151308d-2c53-4c5e-a869-7f59edcc90dc", 00:08:06.236 "is_configured": true, 00:08:06.236 "data_offset": 2048, 00:08:06.236 "data_size": 63488 00:08:06.236 }, 00:08:06.236 { 00:08:06.236 "name": "BaseBdev2", 00:08:06.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.236 "is_configured": false, 00:08:06.236 
"data_offset": 0, 00:08:06.236 "data_size": 0 00:08:06.236 } 00:08:06.236 ] 00:08:06.236 }' 00:08:06.236 10:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.236 10:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.801 10:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:06.801 10:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.801 10:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.801 [2024-11-15 10:36:27.667209] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:06.801 [2024-11-15 10:36:27.667273] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:06.801 10:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.801 10:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:06.801 10:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.801 10:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.801 [2024-11-15 10:36:27.679234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:06.801 [2024-11-15 10:36:27.681651] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:06.801 [2024-11-15 10:36:27.681708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:06.801 10:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.801 10:36:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:06.801 10:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:06.801 10:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:06.801 10:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.801 10:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:06.801 10:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:06.801 10:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:06.801 10:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:06.801 10:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.801 10:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.801 10:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.801 10:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.801 10:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.801 10:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.801 10:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.801 10:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.801 10:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.801 10:36:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.801 "name": "Existed_Raid", 00:08:06.801 "uuid": "aa217aee-8506-4a14-b919-445387366ae9", 00:08:06.801 "strip_size_kb": 0, 00:08:06.801 "state": "configuring", 00:08:06.801 "raid_level": "raid1", 00:08:06.801 "superblock": true, 00:08:06.801 "num_base_bdevs": 2, 00:08:06.801 "num_base_bdevs_discovered": 1, 00:08:06.801 "num_base_bdevs_operational": 2, 00:08:06.801 "base_bdevs_list": [ 00:08:06.801 { 00:08:06.801 "name": "BaseBdev1", 00:08:06.801 "uuid": "e151308d-2c53-4c5e-a869-7f59edcc90dc", 00:08:06.801 "is_configured": true, 00:08:06.801 "data_offset": 2048, 00:08:06.801 "data_size": 63488 00:08:06.801 }, 00:08:06.801 { 00:08:06.801 "name": "BaseBdev2", 00:08:06.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.801 "is_configured": false, 00:08:06.801 "data_offset": 0, 00:08:06.801 "data_size": 0 00:08:06.801 } 00:08:06.801 ] 00:08:06.801 }' 00:08:06.801 10:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.801 10:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.059 10:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:07.059 10:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.059 10:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.059 [2024-11-15 10:36:28.201955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:07.059 [2024-11-15 10:36:28.203202] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:07.059 [2024-11-15 10:36:28.203230] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:07.059 BaseBdev2 00:08:07.059 [2024-11-15 10:36:28.203584] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005d40 00:08:07.059 [2024-11-15 10:36:28.203793] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:07.059 [2024-11-15 10:36:28.203823] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:07.059 [2024-11-15 10:36:28.204005] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:07.059 10:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.059 10:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:07.059 10:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:07.059 10:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:07.059 10:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:07.059 10:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:07.059 10:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:07.059 10:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:07.059 10:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.059 10:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.059 10:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.059 10:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:07.059 10:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.059 10:36:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:07.317 [ 00:08:07.317 { 00:08:07.317 "name": "BaseBdev2", 00:08:07.317 "aliases": [ 00:08:07.317 "1ea6193a-40e8-4e95-9a83-cc97a511ef56" 00:08:07.317 ], 00:08:07.317 "product_name": "Malloc disk", 00:08:07.317 "block_size": 512, 00:08:07.317 "num_blocks": 65536, 00:08:07.317 "uuid": "1ea6193a-40e8-4e95-9a83-cc97a511ef56", 00:08:07.317 "assigned_rate_limits": { 00:08:07.317 "rw_ios_per_sec": 0, 00:08:07.317 "rw_mbytes_per_sec": 0, 00:08:07.317 "r_mbytes_per_sec": 0, 00:08:07.317 "w_mbytes_per_sec": 0 00:08:07.317 }, 00:08:07.317 "claimed": true, 00:08:07.317 "claim_type": "exclusive_write", 00:08:07.317 "zoned": false, 00:08:07.317 "supported_io_types": { 00:08:07.317 "read": true, 00:08:07.317 "write": true, 00:08:07.317 "unmap": true, 00:08:07.317 "flush": true, 00:08:07.317 "reset": true, 00:08:07.317 "nvme_admin": false, 00:08:07.317 "nvme_io": false, 00:08:07.317 "nvme_io_md": false, 00:08:07.317 "write_zeroes": true, 00:08:07.317 "zcopy": true, 00:08:07.317 "get_zone_info": false, 00:08:07.317 "zone_management": false, 00:08:07.317 "zone_append": false, 00:08:07.317 "compare": false, 00:08:07.317 "compare_and_write": false, 00:08:07.317 "abort": true, 00:08:07.317 "seek_hole": false, 00:08:07.317 "seek_data": false, 00:08:07.317 "copy": true, 00:08:07.317 "nvme_iov_md": false 00:08:07.317 }, 00:08:07.317 "memory_domains": [ 00:08:07.317 { 00:08:07.317 "dma_device_id": "system", 00:08:07.317 "dma_device_type": 1 00:08:07.317 }, 00:08:07.317 { 00:08:07.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.317 "dma_device_type": 2 00:08:07.317 } 00:08:07.317 ], 00:08:07.317 "driver_specific": {} 00:08:07.317 } 00:08:07.317 ] 00:08:07.317 10:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.317 10:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:07.317 10:36:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:07.317 10:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:07.317 10:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:07.317 10:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.317 10:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:07.317 10:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:07.317 10:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:07.317 10:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:07.317 10:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.317 10:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.317 10:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.317 10:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.317 10:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.317 10:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.317 10:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.317 10:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.317 10:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.317 10:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:08:07.317 "name": "Existed_Raid", 00:08:07.317 "uuid": "aa217aee-8506-4a14-b919-445387366ae9", 00:08:07.317 "strip_size_kb": 0, 00:08:07.317 "state": "online", 00:08:07.317 "raid_level": "raid1", 00:08:07.317 "superblock": true, 00:08:07.317 "num_base_bdevs": 2, 00:08:07.317 "num_base_bdevs_discovered": 2, 00:08:07.317 "num_base_bdevs_operational": 2, 00:08:07.317 "base_bdevs_list": [ 00:08:07.317 { 00:08:07.317 "name": "BaseBdev1", 00:08:07.317 "uuid": "e151308d-2c53-4c5e-a869-7f59edcc90dc", 00:08:07.317 "is_configured": true, 00:08:07.317 "data_offset": 2048, 00:08:07.317 "data_size": 63488 00:08:07.317 }, 00:08:07.317 { 00:08:07.317 "name": "BaseBdev2", 00:08:07.317 "uuid": "1ea6193a-40e8-4e95-9a83-cc97a511ef56", 00:08:07.317 "is_configured": true, 00:08:07.317 "data_offset": 2048, 00:08:07.317 "data_size": 63488 00:08:07.317 } 00:08:07.317 ] 00:08:07.317 }' 00:08:07.317 10:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.317 10:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.576 10:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:07.576 10:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:07.576 10:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:07.576 10:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:07.576 10:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:07.576 10:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:07.576 10:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:07.576 10:36:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.576 10:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:07.576 10:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.834 [2024-11-15 10:36:28.738477] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:07.834 10:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.834 10:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:07.834 "name": "Existed_Raid", 00:08:07.834 "aliases": [ 00:08:07.834 "aa217aee-8506-4a14-b919-445387366ae9" 00:08:07.834 ], 00:08:07.834 "product_name": "Raid Volume", 00:08:07.834 "block_size": 512, 00:08:07.834 "num_blocks": 63488, 00:08:07.834 "uuid": "aa217aee-8506-4a14-b919-445387366ae9", 00:08:07.834 "assigned_rate_limits": { 00:08:07.834 "rw_ios_per_sec": 0, 00:08:07.834 "rw_mbytes_per_sec": 0, 00:08:07.834 "r_mbytes_per_sec": 0, 00:08:07.834 "w_mbytes_per_sec": 0 00:08:07.834 }, 00:08:07.834 "claimed": false, 00:08:07.834 "zoned": false, 00:08:07.834 "supported_io_types": { 00:08:07.834 "read": true, 00:08:07.834 "write": true, 00:08:07.834 "unmap": false, 00:08:07.834 "flush": false, 00:08:07.834 "reset": true, 00:08:07.834 "nvme_admin": false, 00:08:07.834 "nvme_io": false, 00:08:07.834 "nvme_io_md": false, 00:08:07.834 "write_zeroes": true, 00:08:07.834 "zcopy": false, 00:08:07.834 "get_zone_info": false, 00:08:07.834 "zone_management": false, 00:08:07.834 "zone_append": false, 00:08:07.834 "compare": false, 00:08:07.834 "compare_and_write": false, 00:08:07.834 "abort": false, 00:08:07.834 "seek_hole": false, 00:08:07.834 "seek_data": false, 00:08:07.834 "copy": false, 00:08:07.834 "nvme_iov_md": false 00:08:07.834 }, 00:08:07.834 "memory_domains": [ 00:08:07.834 { 00:08:07.834 "dma_device_id": "system", 00:08:07.834 
"dma_device_type": 1 00:08:07.834 }, 00:08:07.834 { 00:08:07.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.834 "dma_device_type": 2 00:08:07.834 }, 00:08:07.834 { 00:08:07.834 "dma_device_id": "system", 00:08:07.834 "dma_device_type": 1 00:08:07.834 }, 00:08:07.834 { 00:08:07.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.834 "dma_device_type": 2 00:08:07.834 } 00:08:07.834 ], 00:08:07.834 "driver_specific": { 00:08:07.834 "raid": { 00:08:07.834 "uuid": "aa217aee-8506-4a14-b919-445387366ae9", 00:08:07.834 "strip_size_kb": 0, 00:08:07.834 "state": "online", 00:08:07.834 "raid_level": "raid1", 00:08:07.834 "superblock": true, 00:08:07.834 "num_base_bdevs": 2, 00:08:07.834 "num_base_bdevs_discovered": 2, 00:08:07.834 "num_base_bdevs_operational": 2, 00:08:07.834 "base_bdevs_list": [ 00:08:07.834 { 00:08:07.834 "name": "BaseBdev1", 00:08:07.834 "uuid": "e151308d-2c53-4c5e-a869-7f59edcc90dc", 00:08:07.834 "is_configured": true, 00:08:07.834 "data_offset": 2048, 00:08:07.834 "data_size": 63488 00:08:07.834 }, 00:08:07.834 { 00:08:07.834 "name": "BaseBdev2", 00:08:07.834 "uuid": "1ea6193a-40e8-4e95-9a83-cc97a511ef56", 00:08:07.834 "is_configured": true, 00:08:07.834 "data_offset": 2048, 00:08:07.834 "data_size": 63488 00:08:07.834 } 00:08:07.834 ] 00:08:07.834 } 00:08:07.834 } 00:08:07.834 }' 00:08:07.834 10:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:07.834 10:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:07.834 BaseBdev2' 00:08:07.834 10:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:07.834 10:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:07.834 10:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:08:07.834 10:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:07.834 10:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:07.834 10:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.834 10:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.834 10:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.834 10:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:07.834 10:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:07.834 10:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:07.834 10:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:07.834 10:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:07.834 10:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.834 10:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.834 10:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.834 10:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:07.834 10:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:07.834 10:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:07.834 10:36:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.834 10:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.834 [2024-11-15 10:36:28.990243] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:08.123 10:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.123 10:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:08.123 10:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:08.123 10:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:08.123 10:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:08.123 10:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:08.123 10:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:08.123 10:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.123 10:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:08.123 10:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:08.123 10:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:08.123 10:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:08.123 10:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.123 10:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.123 10:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:08.123 10:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.123 10:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.123 10:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.123 10:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.123 10:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.123 10:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.123 10:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.123 "name": "Existed_Raid", 00:08:08.123 "uuid": "aa217aee-8506-4a14-b919-445387366ae9", 00:08:08.123 "strip_size_kb": 0, 00:08:08.123 "state": "online", 00:08:08.123 "raid_level": "raid1", 00:08:08.123 "superblock": true, 00:08:08.123 "num_base_bdevs": 2, 00:08:08.123 "num_base_bdevs_discovered": 1, 00:08:08.123 "num_base_bdevs_operational": 1, 00:08:08.123 "base_bdevs_list": [ 00:08:08.123 { 00:08:08.123 "name": null, 00:08:08.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.123 "is_configured": false, 00:08:08.123 "data_offset": 0, 00:08:08.123 "data_size": 63488 00:08:08.123 }, 00:08:08.123 { 00:08:08.123 "name": "BaseBdev2", 00:08:08.123 "uuid": "1ea6193a-40e8-4e95-9a83-cc97a511ef56", 00:08:08.123 "is_configured": true, 00:08:08.123 "data_offset": 2048, 00:08:08.123 "data_size": 63488 00:08:08.123 } 00:08:08.123 ] 00:08:08.123 }' 00:08:08.123 10:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.123 10:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.721 10:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:08:08.721 10:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:08.721 10:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.721 10:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.721 10:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.721 10:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:08.721 10:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.721 10:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:08.721 10:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:08.721 10:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:08.721 10:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.721 10:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.721 [2024-11-15 10:36:29.638739] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:08.721 [2024-11-15 10:36:29.638872] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:08.721 [2024-11-15 10:36:29.726106] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:08.721 [2024-11-15 10:36:29.726385] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:08.721 [2024-11-15 10:36:29.726423] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:08.721 10:36:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.721 10:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:08.721 10:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:08.721 10:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.721 10:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:08.721 10:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.721 10:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.721 10:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.721 10:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:08.721 10:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:08.721 10:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:08.721 10:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62917 00:08:08.721 10:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62917 ']' 00:08:08.721 10:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62917 00:08:08.721 10:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:08.721 10:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:08.721 10:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62917 00:08:08.721 killing process with pid 62917 00:08:08.721 10:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:08:08.721 10:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:08.721 10:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62917' 00:08:08.721 10:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62917 00:08:08.721 [2024-11-15 10:36:29.835059] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:08.721 10:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62917 00:08:08.721 [2024-11-15 10:36:29.849816] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:10.096 ************************************ 00:08:10.096 END TEST raid_state_function_test_sb 00:08:10.096 ************************************ 00:08:10.096 10:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:10.096 00:08:10.096 real 0m5.423s 00:08:10.096 user 0m8.158s 00:08:10.096 sys 0m0.773s 00:08:10.096 10:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.096 10:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.096 10:36:30 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:10.096 10:36:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:10.096 10:36:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.096 10:36:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:10.096 ************************************ 00:08:10.096 START TEST raid_superblock_test 00:08:10.096 ************************************ 00:08:10.096 10:36:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:08:10.096 10:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 
00:08:10.096 10:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:10.096 10:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:10.096 10:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:10.096 10:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:10.096 10:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:10.096 10:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:10.096 10:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:10.096 10:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:10.096 10:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:10.096 10:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:10.096 10:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:10.096 10:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:10.096 10:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:10.096 10:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:10.096 10:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63174 00:08:10.096 10:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63174 00:08:10.096 10:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:10.096 10:36:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63174 ']' 00:08:10.096 10:36:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.096 10:36:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:10.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.096 10:36:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.096 10:36:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:10.096 10:36:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.096 [2024-11-15 10:36:31.042190] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:08:10.096 [2024-11-15 10:36:31.042387] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63174 ] 00:08:10.096 [2024-11-15 10:36:31.225885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.355 [2024-11-15 10:36:31.358686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.613 [2024-11-15 10:36:31.562263] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:10.613 [2024-11-15 10:36:31.562353] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:10.871 10:36:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:10.871 10:36:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:10.871 10:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:10.871 10:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:10.871 10:36:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:10.871 10:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:10.871 10:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:10.871 10:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:10.871 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:10.871 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:10.871 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:10.871 10:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.871 10:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.130 malloc1 00:08:11.130 10:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.130 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:11.130 10:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.130 10:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.130 [2024-11-15 10:36:32.052268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:11.130 [2024-11-15 10:36:32.052344] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:11.130 [2024-11-15 10:36:32.052379] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:11.130 [2024-11-15 10:36:32.052395] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:11.130 
[2024-11-15 10:36:32.055222] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:11.130 [2024-11-15 10:36:32.055266] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:08:11.130 pt1
00:08:11.130 10:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:11.130 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:08:11.130 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:08:11.130 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:08:11.130 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:08:11.130 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:08:11.130 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:08:11.130 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:08:11.130 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:08:11.130 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:08:11.130 10:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:11.130 10:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.130 malloc2
00:08:11.130 10:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:11.130 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:08:11.130 10:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:11.130 10:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.130 [2024-11-15 10:36:32.100412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:08:11.130 [2024-11-15 10:36:32.100479] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:11.130 [2024-11-15 10:36:32.100545] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:08:11.130 [2024-11-15 10:36:32.100564] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:11.130 [2024-11-15 10:36:32.103349] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:11.130 [2024-11-15 10:36:32.103391] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:08:11.130 pt2
00:08:11.130 10:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:11.130 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:08:11.130 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:08:11.130 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:08:11.130 10:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:11.130 10:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.130 [2024-11-15 10:36:32.108479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:08:11.130 [2024-11-15 10:36:32.110900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:08:11.130 [2024-11-15 10:36:32.111119] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:08:11.130 [2024-11-15 10:36:32.111144] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:08:11.130 [2024-11-15 10:36:32.111454] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:08:11.130 [2024-11-15 10:36:32.111677] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:08:11.130 [2024-11-15 10:36:32.111705] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:08:11.130 [2024-11-15 10:36:32.111887] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:11.130 10:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:11.130 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:08:11.130 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:11.130 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:11.130 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:11.131 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:11.131 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:11.131 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:11.131 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:11.131 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:11.131 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:11.131 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:11.131 10:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:11.131 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:11.131 10:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.131 10:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:11.131 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:11.131 "name": "raid_bdev1",
00:08:11.131 "uuid": "27ce08f4-137a-4f5f-b85a-ad99f1cf8bd9",
00:08:11.131 "strip_size_kb": 0,
00:08:11.131 "state": "online",
00:08:11.131 "raid_level": "raid1",
00:08:11.131 "superblock": true,
00:08:11.131 "num_base_bdevs": 2,
00:08:11.131 "num_base_bdevs_discovered": 2,
00:08:11.131 "num_base_bdevs_operational": 2,
00:08:11.131 "base_bdevs_list": [
00:08:11.131 {
00:08:11.131 "name": "pt1",
00:08:11.131 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:11.131 "is_configured": true,
00:08:11.131 "data_offset": 2048,
00:08:11.131 "data_size": 63488
00:08:11.131 },
00:08:11.131 {
00:08:11.131 "name": "pt2",
00:08:11.131 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:11.131 "is_configured": true,
00:08:11.131 "data_offset": 2048,
00:08:11.131 "data_size": 63488
00:08:11.131 }
00:08:11.131 ]
00:08:11.131 }'
00:08:11.131 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:11.131 10:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.695 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:08:11.695 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:08:11.695 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:11.695 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:11.695 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:08:11.695 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:11.695 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:08:11.695 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:11.695 10:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:11.695 10:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.695 [2024-11-15 10:36:32.624982] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:11.695 10:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:11.695 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:11.695 "name": "raid_bdev1",
00:08:11.695 "aliases": [
00:08:11.695 "27ce08f4-137a-4f5f-b85a-ad99f1cf8bd9"
00:08:11.695 ],
00:08:11.695 "product_name": "Raid Volume",
00:08:11.695 "block_size": 512,
00:08:11.695 "num_blocks": 63488,
00:08:11.695 "uuid": "27ce08f4-137a-4f5f-b85a-ad99f1cf8bd9",
00:08:11.695 "assigned_rate_limits": {
00:08:11.695 "rw_ios_per_sec": 0,
00:08:11.695 "rw_mbytes_per_sec": 0,
00:08:11.695 "r_mbytes_per_sec": 0,
00:08:11.695 "w_mbytes_per_sec": 0
00:08:11.695 },
00:08:11.695 "claimed": false,
00:08:11.695 "zoned": false,
00:08:11.695 "supported_io_types": {
00:08:11.695 "read": true,
00:08:11.695 "write": true,
00:08:11.695 "unmap": false,
00:08:11.695 "flush": false,
00:08:11.695 "reset": true,
00:08:11.695 "nvme_admin": false,
00:08:11.695 "nvme_io": false,
00:08:11.695 "nvme_io_md": false,
00:08:11.695 "write_zeroes": true,
00:08:11.695 "zcopy": false,
00:08:11.695 "get_zone_info": false,
00:08:11.695 "zone_management": false,
00:08:11.695 "zone_append": false,
00:08:11.695 "compare": false,
00:08:11.695 "compare_and_write": false,
00:08:11.695 "abort": false,
00:08:11.695 "seek_hole": false,
00:08:11.695 "seek_data": false,
00:08:11.695 "copy": false,
00:08:11.695 "nvme_iov_md": false
00:08:11.695 },
00:08:11.695 "memory_domains": [
00:08:11.695 {
00:08:11.695 "dma_device_id": "system",
00:08:11.695 "dma_device_type": 1
00:08:11.695 },
00:08:11.695 {
00:08:11.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:11.695 "dma_device_type": 2
00:08:11.695 },
00:08:11.695 {
00:08:11.695 "dma_device_id": "system",
00:08:11.695 "dma_device_type": 1
00:08:11.695 },
00:08:11.695 {
00:08:11.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:11.695 "dma_device_type": 2
00:08:11.695 }
00:08:11.695 ],
00:08:11.695 "driver_specific": {
00:08:11.695 "raid": {
00:08:11.695 "uuid": "27ce08f4-137a-4f5f-b85a-ad99f1cf8bd9",
00:08:11.695 "strip_size_kb": 0,
00:08:11.695 "state": "online",
00:08:11.695 "raid_level": "raid1",
00:08:11.695 "superblock": true,
00:08:11.695 "num_base_bdevs": 2,
00:08:11.695 "num_base_bdevs_discovered": 2,
00:08:11.695 "num_base_bdevs_operational": 2,
00:08:11.695 "base_bdevs_list": [
00:08:11.695 {
00:08:11.695 "name": "pt1",
00:08:11.695 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:11.695 "is_configured": true,
00:08:11.695 "data_offset": 2048,
00:08:11.695 "data_size": 63488
00:08:11.695 },
00:08:11.695 {
00:08:11.695 "name": "pt2",
00:08:11.695 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:11.695 "is_configured": true,
00:08:11.695 "data_offset": 2048,
00:08:11.695 "data_size": 63488
00:08:11.695 }
00:08:11.695 ]
00:08:11.695 }
00:08:11.695 }
00:08:11.695 }'
00:08:11.695 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:11.695 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:08:11.695 pt2'
00:08:11.695 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:11.695 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:08:11.695 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:11.695 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:08:11.695 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:11.695 10:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:11.695 10:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.695 10:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:11.695 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:11.695 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:11.695 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:11.695 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:11.695 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:08:11.695 10:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:11.695 10:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.695 10:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:11.952 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:11.953 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:11.953 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:08:11.953 10:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:11.953 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:08:11.953 10:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.953 [2024-11-15 10:36:32.880954] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:11.953 10:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:11.953 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=27ce08f4-137a-4f5f-b85a-ad99f1cf8bd9
00:08:11.953 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 27ce08f4-137a-4f5f-b85a-ad99f1cf8bd9 ']'
00:08:11.953 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:08:11.953 10:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:11.953 10:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.953 [2024-11-15 10:36:32.932623] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:11.953 [2024-11-15 10:36:32.932657] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:11.953 [2024-11-15 10:36:32.932760] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:11.953 [2024-11-15 10:36:32.932839] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:11.953 [2024-11-15 10:36:32.932860] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:08:11.953 10:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:11.953 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:11.953 10:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:11.953 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:08:11.953 10:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.953 10:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:11.953 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:08:11.953 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:08:11.953 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:08:11.953 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:08:11.953 10:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:11.953 10:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.953 10:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:11.953 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:08:11.953 10:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:08:11.953 10:36:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:11.953 10:36:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.953 10:36:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:11.953 10:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:08:11.953 10:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:08:11.953 10:36:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:11.953 10:36:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.953 10:36:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:11.953 10:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:08:11.953 10:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:08:11.953 10:36:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:08:11.953 10:36:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:08:11.953 10:36:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:08:11.953 10:36:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:11.953 10:36:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:08:11.953 10:36:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:11.953 10:36:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:08:11.953 10:36:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:11.953 10:36:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.953 [2024-11-15 10:36:33.072698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:08:11.953 [2024-11-15 10:36:33.075182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:08:11.953 [2024-11-15 10:36:33.075282] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:08:11.953 [2024-11-15 10:36:33.075353] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:08:11.953 [2024-11-15 10:36:33.075380] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:11.953 [2024-11-15 10:36:33.075396] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:08:11.953 request:
00:08:11.953 {
00:08:11.953 "name": "raid_bdev1",
00:08:11.953 "raid_level": "raid1",
00:08:11.953 "base_bdevs": [
00:08:11.953 "malloc1",
00:08:11.953 "malloc2"
00:08:11.953 ],
00:08:11.953 "superblock": false,
00:08:11.953 "method": "bdev_raid_create",
00:08:11.953 "req_id": 1
00:08:11.953 }
00:08:11.953 Got JSON-RPC error response
00:08:11.953 response:
00:08:11.953 {
00:08:11.953 "code": -17,
00:08:11.953 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:08:11.953 }
00:08:11.953 10:36:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:08:11.953 10:36:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:08:11.953 10:36:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:08:11.953 10:36:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:08:11.953 10:36:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:08:11.953 10:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:08:11.953 10:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:11.953 10:36:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:11.953 10:36:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.953 10:36:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:12.210 10:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:08:12.210 10:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:08:12.210 10:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:08:12.210 10:36:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:12.210 10:36:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.210 [2024-11-15 10:36:33.132691] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:08:12.210 [2024-11-15 10:36:33.132755] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:12.210 [2024-11-15 10:36:33.132781] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:08:12.210 [2024-11-15 10:36:33.132798] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:12.210 [2024-11-15 10:36:33.135626] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:12.210 [2024-11-15 10:36:33.135675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:08:12.210 [2024-11-15 10:36:33.135772] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:08:12.210 [2024-11-15 10:36:33.135851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:08:12.210 pt1
00:08:12.210 10:36:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:12.210 10:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:08:12.210 10:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:12.210 10:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:12.210 10:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:12.210 10:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:12.210 10:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:12.210 10:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:12.210 10:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:12.210 10:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:12.210 10:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:12.210 10:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:12.210 10:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:12.210 10:36:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:12.210 10:36:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.210 10:36:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:12.210 10:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:12.210 "name": "raid_bdev1",
00:08:12.210 "uuid": "27ce08f4-137a-4f5f-b85a-ad99f1cf8bd9",
00:08:12.210 "strip_size_kb": 0,
00:08:12.210 "state": "configuring",
00:08:12.210 "raid_level": "raid1",
00:08:12.210 "superblock": true,
00:08:12.210 "num_base_bdevs": 2,
00:08:12.210 "num_base_bdevs_discovered": 1,
00:08:12.210 "num_base_bdevs_operational": 2,
00:08:12.210 "base_bdevs_list": [
00:08:12.210 {
00:08:12.210 "name": "pt1",
00:08:12.210 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:12.210 "is_configured": true,
00:08:12.210 "data_offset": 2048,
00:08:12.210 "data_size": 63488
00:08:12.210 },
00:08:12.210 {
00:08:12.210 "name": null,
00:08:12.210 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:12.210 "is_configured": false,
00:08:12.210 "data_offset": 2048,
00:08:12.210 "data_size": 63488
00:08:12.210 }
00:08:12.210 ]
00:08:12.210 }'
00:08:12.210 10:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:12.210 10:36:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.775 10:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:08:12.775 10:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:08:12.775 10:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:08:12.775 10:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:08:12.775 10:36:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:12.775 10:36:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.775 [2024-11-15 10:36:33.644882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:08:12.775 [2024-11-15 10:36:33.644971] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:12.775 [2024-11-15 10:36:33.645005] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:08:12.775 [2024-11-15 10:36:33.645023] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:12.775 [2024-11-15 10:36:33.645635] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:12.775 [2024-11-15 10:36:33.645668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:08:12.775 [2024-11-15 10:36:33.645779] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:08:12.775 [2024-11-15 10:36:33.645818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:08:12.776 [2024-11-15 10:36:33.645965] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:08:12.776 [2024-11-15 10:36:33.645987] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:08:12.776 [2024-11-15 10:36:33.646297] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:08:12.776 [2024-11-15 10:36:33.646517] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:08:12.776 [2024-11-15 10:36:33.646536] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:08:12.776 [2024-11-15 10:36:33.646709] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:12.776 pt2
00:08:12.776 10:36:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:12.776 10:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:08:12.776 10:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:08:12.776 10:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:08:12.776 10:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:12.776 10:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:12.776 10:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:12.776 10:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:12.776 10:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:12.776 10:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:12.776 10:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:12.776 10:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:12.776 10:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:12.776 10:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:12.776 10:36:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:12.776 10:36:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.776 10:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:12.776 10:36:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:12.776 10:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:12.776 "name": "raid_bdev1",
00:08:12.776 "uuid": "27ce08f4-137a-4f5f-b85a-ad99f1cf8bd9",
00:08:12.776 "strip_size_kb": 0,
00:08:12.776 "state": "online",
00:08:12.776 "raid_level": "raid1",
00:08:12.776 "superblock": true,
00:08:12.776 "num_base_bdevs": 2,
00:08:12.776 "num_base_bdevs_discovered": 2,
00:08:12.776 "num_base_bdevs_operational": 2,
00:08:12.776 "base_bdevs_list": [
00:08:12.776 {
00:08:12.776 "name": "pt1",
00:08:12.776 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:12.776 "is_configured": true,
00:08:12.776 "data_offset": 2048,
00:08:12.776 "data_size": 63488
00:08:12.776 },
00:08:12.776 {
00:08:12.776 "name": "pt2",
00:08:12.776 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:12.776 "is_configured": true,
00:08:12.776 "data_offset": 2048,
00:08:12.776 "data_size": 63488
00:08:12.776 }
00:08:12.776 ]
00:08:12.776 }'
00:08:12.776 10:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:12.776 10:36:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:13.083 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:08:13.083 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:08:13.083 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:13.083 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:13.083 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:08:13.083 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:13.083 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:13.083 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:08:13.083 10:36:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.083 10:36:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:13.083 [2024-11-15 10:36:34.173327] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:13.083 10:36:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.083 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:13.083 "name": "raid_bdev1",
00:08:13.083 "aliases": [
00:08:13.083 "27ce08f4-137a-4f5f-b85a-ad99f1cf8bd9"
00:08:13.083 ],
00:08:13.083 "product_name": "Raid Volume",
00:08:13.083 "block_size": 512,
00:08:13.083 "num_blocks": 63488,
00:08:13.083 "uuid": "27ce08f4-137a-4f5f-b85a-ad99f1cf8bd9",
00:08:13.083 "assigned_rate_limits": {
00:08:13.083 "rw_ios_per_sec": 0,
00:08:13.083 "rw_mbytes_per_sec": 0,
00:08:13.083 "r_mbytes_per_sec": 0,
00:08:13.083 "w_mbytes_per_sec": 0
00:08:13.083 },
00:08:13.083 "claimed": false,
00:08:13.083 "zoned": false,
00:08:13.083 "supported_io_types": {
00:08:13.083 "read": true,
00:08:13.083 "write": true,
00:08:13.083 "unmap": false,
00:08:13.083 "flush": false,
00:08:13.083 "reset": true,
00:08:13.083 "nvme_admin": false,
00:08:13.083 "nvme_io": false,
00:08:13.083 "nvme_io_md": false,
00:08:13.083 "write_zeroes": true,
00:08:13.083 "zcopy": false,
00:08:13.083 "get_zone_info": false,
00:08:13.083 "zone_management": false,
00:08:13.083 "zone_append": false,
00:08:13.083 "compare": false,
00:08:13.083 "compare_and_write": false,
00:08:13.083 "abort": false,
00:08:13.083 "seek_hole": false,
00:08:13.083 "seek_data": false,
00:08:13.083 "copy": false,
00:08:13.083 "nvme_iov_md": false
00:08:13.083 },
00:08:13.083 "memory_domains": [
00:08:13.083 {
00:08:13.083 "dma_device_id": "system",
00:08:13.083 "dma_device_type": 1
00:08:13.083 },
00:08:13.083 {
00:08:13.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:13.083 "dma_device_type": 2
00:08:13.083 },
00:08:13.083 {
00:08:13.083 "dma_device_id": "system",
00:08:13.083 "dma_device_type": 1
00:08:13.083 },
00:08:13.083 {
00:08:13.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:13.083 "dma_device_type": 2
00:08:13.083 }
00:08:13.083 ],
00:08:13.083 "driver_specific": {
00:08:13.083 "raid": {
00:08:13.083 "uuid": "27ce08f4-137a-4f5f-b85a-ad99f1cf8bd9",
00:08:13.083 "strip_size_kb": 0,
00:08:13.083 "state": "online",
00:08:13.083 "raid_level": "raid1",
00:08:13.083 "superblock": true,
00:08:13.083 "num_base_bdevs": 2,
00:08:13.083 "num_base_bdevs_discovered": 2,
00:08:13.083 "num_base_bdevs_operational": 2,
00:08:13.083 "base_bdevs_list": [
00:08:13.083 {
00:08:13.083 "name": "pt1",
00:08:13.083 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:13.083 "is_configured": true,
00:08:13.083 "data_offset": 2048,
00:08:13.083 "data_size": 63488
00:08:13.083 },
00:08:13.083 {
00:08:13.083 "name": "pt2",
00:08:13.083 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:13.083 "is_configured": true,
00:08:13.083 "data_offset": 2048,
00:08:13.083 "data_size": 63488
00:08:13.083 }
00:08:13.083 ]
00:08:13.083 }
00:08:13.083 }
00:08:13.083 }'
00:08:13.083 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:13.341 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:08:13.341 pt2'
00:08:13.341 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:13.341 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:08:13.341 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:13.341 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:08:13.341 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:13.341 10:36:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.341 10:36:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:13.341 10:36:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.341 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:13.341 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:13.341 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:13.341 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:08:13.341 10:36:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.341 10:36:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:13.341 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:13.341 10:36:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.341 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:13.341 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:13.341 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:08:13.341 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:08:13.341 10:36:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.341 10:36:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:13.341 [2024-11-15 10:36:34.421337] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:13.341 10:36:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.341 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 27ce08f4-137a-4f5f-b85a-ad99f1cf8bd9 '!=' 27ce08f4-137a-4f5f-b85a-ad99f1cf8bd9 ']'
00:08:13.341 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:08:13.341 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:13.341 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:08:13.341 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:08:13.341 10:36:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.341 10:36:34 bdev_raid.raid_superblock_test --
common/autotest_common.sh@10 -- # set +x 00:08:13.341 [2024-11-15 10:36:34.465091] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:13.341 10:36:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.341 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:13.341 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:13.341 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:13.341 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:13.341 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:13.341 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:13.341 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.341 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.341 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.341 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.341 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.342 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:13.342 10:36:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.342 10:36:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.342 10:36:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.600 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:08:13.600 "name": "raid_bdev1", 00:08:13.600 "uuid": "27ce08f4-137a-4f5f-b85a-ad99f1cf8bd9", 00:08:13.600 "strip_size_kb": 0, 00:08:13.600 "state": "online", 00:08:13.600 "raid_level": "raid1", 00:08:13.600 "superblock": true, 00:08:13.600 "num_base_bdevs": 2, 00:08:13.600 "num_base_bdevs_discovered": 1, 00:08:13.600 "num_base_bdevs_operational": 1, 00:08:13.600 "base_bdevs_list": [ 00:08:13.600 { 00:08:13.600 "name": null, 00:08:13.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.600 "is_configured": false, 00:08:13.600 "data_offset": 0, 00:08:13.600 "data_size": 63488 00:08:13.600 }, 00:08:13.600 { 00:08:13.600 "name": "pt2", 00:08:13.600 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:13.600 "is_configured": true, 00:08:13.600 "data_offset": 2048, 00:08:13.600 "data_size": 63488 00:08:13.600 } 00:08:13.600 ] 00:08:13.600 }' 00:08:13.600 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.600 10:36:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.859 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:13.859 10:36:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.859 10:36:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.859 [2024-11-15 10:36:34.981221] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:13.859 [2024-11-15 10:36:34.981259] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:13.859 [2024-11-15 10:36:34.981355] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:13.859 [2024-11-15 10:36:34.981420] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:13.859 [2024-11-15 10:36:34.981440] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:13.859 10:36:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.859 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.859 10:36:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.859 10:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:13.859 10:36:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.859 10:36:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.116 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:14.116 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:14.116 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:14.116 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:14.116 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:14.116 10:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.116 10:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.116 10:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.116 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:14.116 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:14.116 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:14.116 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:14.116 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:08:14.116 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:14.116 10:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.116 10:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.116 [2024-11-15 10:36:35.053205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:14.116 [2024-11-15 10:36:35.053277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:14.116 [2024-11-15 10:36:35.053304] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:14.116 [2024-11-15 10:36:35.053322] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:14.116 [2024-11-15 10:36:35.056176] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.116 [2024-11-15 10:36:35.056222] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:14.116 [2024-11-15 10:36:35.056319] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:14.116 [2024-11-15 10:36:35.056382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:14.116 [2024-11-15 10:36:35.056528] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:14.116 [2024-11-15 10:36:35.056565] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:14.117 [2024-11-15 10:36:35.056852] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:14.117 [2024-11-15 10:36:35.057056] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:14.117 [2024-11-15 10:36:35.057073] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000008200 00:08:14.117 [2024-11-15 10:36:35.057294] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:14.117 pt2 00:08:14.117 10:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.117 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:14.117 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:14.117 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:14.117 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:14.117 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:14.117 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:14.117 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.117 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.117 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.117 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.117 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.117 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:14.117 10:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.117 10:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.117 10:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.117 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:08:14.117 "name": "raid_bdev1", 00:08:14.117 "uuid": "27ce08f4-137a-4f5f-b85a-ad99f1cf8bd9", 00:08:14.117 "strip_size_kb": 0, 00:08:14.117 "state": "online", 00:08:14.117 "raid_level": "raid1", 00:08:14.117 "superblock": true, 00:08:14.117 "num_base_bdevs": 2, 00:08:14.117 "num_base_bdevs_discovered": 1, 00:08:14.117 "num_base_bdevs_operational": 1, 00:08:14.117 "base_bdevs_list": [ 00:08:14.117 { 00:08:14.117 "name": null, 00:08:14.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.117 "is_configured": false, 00:08:14.117 "data_offset": 2048, 00:08:14.117 "data_size": 63488 00:08:14.117 }, 00:08:14.117 { 00:08:14.117 "name": "pt2", 00:08:14.117 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:14.117 "is_configured": true, 00:08:14.117 "data_offset": 2048, 00:08:14.117 "data_size": 63488 00:08:14.117 } 00:08:14.117 ] 00:08:14.117 }' 00:08:14.117 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.117 10:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.682 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:14.682 10:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.682 10:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.682 [2024-11-15 10:36:35.549339] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:14.682 [2024-11-15 10:36:35.549381] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:14.682 [2024-11-15 10:36:35.549471] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:14.682 [2024-11-15 10:36:35.549559] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:14.682 [2024-11-15 10:36:35.549576] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:14.682 10:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.682 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:14.682 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.682 10:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.682 10:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.682 10:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.682 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:14.682 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:08:14.682 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:14.682 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:14.682 10:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.682 10:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.682 [2024-11-15 10:36:35.613360] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:14.682 [2024-11-15 10:36:35.613576] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:14.682 [2024-11-15 10:36:35.613622] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:14.682 [2024-11-15 10:36:35.613639] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:14.682 [2024-11-15 10:36:35.616527] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.682 [2024-11-15 10:36:35.616708] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:14.682 [2024-11-15 10:36:35.616835] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:14.682 [2024-11-15 10:36:35.616897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:14.682 [2024-11-15 10:36:35.617084] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:14.682 [2024-11-15 10:36:35.617102] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:14.682 [2024-11-15 10:36:35.617124] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:08:14.682 [2024-11-15 10:36:35.617196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:14.682 [2024-11-15 10:36:35.617305] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:08:14.682 [2024-11-15 10:36:35.617321] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:14.682 [2024-11-15 10:36:35.617653] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:14.682 [2024-11-15 10:36:35.617844] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:08:14.682 [2024-11-15 10:36:35.617866] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:08:14.682 [2024-11-15 10:36:35.618097] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:14.682 pt1 00:08:14.682 10:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.682 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:14.682 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:08:14.682 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:14.682 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:14.682 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:14.682 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:14.682 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:14.682 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.682 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.682 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.683 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.683 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.683 10:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.683 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:14.683 10:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.683 10:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.683 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.683 "name": "raid_bdev1", 00:08:14.683 "uuid": "27ce08f4-137a-4f5f-b85a-ad99f1cf8bd9", 00:08:14.683 "strip_size_kb": 0, 00:08:14.683 "state": "online", 00:08:14.683 "raid_level": "raid1", 00:08:14.683 "superblock": true, 00:08:14.683 "num_base_bdevs": 2, 00:08:14.683 "num_base_bdevs_discovered": 1, 00:08:14.683 "num_base_bdevs_operational": 
1, 00:08:14.683 "base_bdevs_list": [ 00:08:14.683 { 00:08:14.683 "name": null, 00:08:14.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.683 "is_configured": false, 00:08:14.683 "data_offset": 2048, 00:08:14.683 "data_size": 63488 00:08:14.683 }, 00:08:14.683 { 00:08:14.683 "name": "pt2", 00:08:14.683 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:14.683 "is_configured": true, 00:08:14.683 "data_offset": 2048, 00:08:14.683 "data_size": 63488 00:08:14.683 } 00:08:14.683 ] 00:08:14.683 }' 00:08:14.683 10:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.683 10:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.248 10:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:15.248 10:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:15.248 10:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.248 10:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.248 10:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.248 10:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:15.248 10:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:15.248 10:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:15.248 10:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.248 10:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.248 [2024-11-15 10:36:36.209827] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:15.248 10:36:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.248 10:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 27ce08f4-137a-4f5f-b85a-ad99f1cf8bd9 '!=' 27ce08f4-137a-4f5f-b85a-ad99f1cf8bd9 ']' 00:08:15.248 10:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63174 00:08:15.248 10:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63174 ']' 00:08:15.248 10:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63174 00:08:15.248 10:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:15.248 10:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:15.248 10:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63174 00:08:15.248 10:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:15.248 10:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:15.248 10:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63174' 00:08:15.248 killing process with pid 63174 00:08:15.248 10:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63174 00:08:15.248 [2024-11-15 10:36:36.308295] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:15.249 [2024-11-15 10:36:36.308418] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:15.249 10:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63174 00:08:15.249 [2024-11-15 10:36:36.308512] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:15.249 [2024-11-15 10:36:36.308548] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state 
offline 00:08:15.505 [2024-11-15 10:36:36.493876] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:16.437 ************************************ 00:08:16.437 END TEST raid_superblock_test 00:08:16.437 ************************************ 00:08:16.437 10:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:16.437 00:08:16.437 real 0m6.593s 00:08:16.437 user 0m10.458s 00:08:16.437 sys 0m0.918s 00:08:16.437 10:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.437 10:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.437 10:36:37 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:16.437 10:36:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:16.437 10:36:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.437 10:36:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:16.437 ************************************ 00:08:16.437 START TEST raid_read_error_test 00:08:16.437 ************************************ 00:08:16.437 10:36:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:08:16.437 10:36:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:16.437 10:36:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:16.437 10:36:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:16.437 10:36:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:16.437 10:36:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:16.437 10:36:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:16.437 10:36:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:08:16.437 10:36:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:16.437 10:36:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:16.437 10:36:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:16.437 10:36:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:16.437 10:36:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:16.437 10:36:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:16.437 10:36:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:16.437 10:36:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:16.437 10:36:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:16.437 10:36:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:16.438 10:36:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:16.438 10:36:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:16.438 10:36:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:16.438 10:36:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:16.438 10:36:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.vKQD0UJ8H1 00:08:16.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:16.438 10:36:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63510 00:08:16.438 10:36:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63510 00:08:16.438 10:36:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63510 ']' 00:08:16.438 10:36:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:16.438 10:36:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.438 10:36:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:16.438 10:36:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.438 10:36:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:16.438 10:36:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.695 [2024-11-15 10:36:37.699383] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:08:16.695 [2024-11-15 10:36:37.699579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63510 ] 00:08:16.953 [2024-11-15 10:36:37.888762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.953 [2024-11-15 10:36:38.045552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.211 [2024-11-15 10:36:38.258472] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:17.211 [2024-11-15 10:36:38.258759] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:17.783 10:36:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:17.783 10:36:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:17.783 10:36:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:17.783 10:36:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:17.783 10:36:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.783 10:36:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.783 BaseBdev1_malloc 00:08:17.783 10:36:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.783 10:36:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:17.783 10:36:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.783 10:36:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.783 true 00:08:17.783 10:36:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:17.783 10:36:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:17.783 10:36:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.783 10:36:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.783 [2024-11-15 10:36:38.831666] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:17.783 [2024-11-15 10:36:38.831736] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:17.783 [2024-11-15 10:36:38.831767] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:17.783 [2024-11-15 10:36:38.831787] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:17.783 [2024-11-15 10:36:38.834584] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:17.783 [2024-11-15 10:36:38.834635] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:17.783 BaseBdev1 00:08:17.783 10:36:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.783 10:36:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:17.783 10:36:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:17.783 10:36:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.783 10:36:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.783 BaseBdev2_malloc 00:08:17.783 10:36:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.783 10:36:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:17.783 10:36:38 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.783 10:36:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.783 true 00:08:17.783 10:36:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.783 10:36:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:17.783 10:36:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.783 10:36:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.783 [2024-11-15 10:36:38.891416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:17.783 [2024-11-15 10:36:38.891501] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:17.783 [2024-11-15 10:36:38.891531] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:17.783 [2024-11-15 10:36:38.891550] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:17.783 [2024-11-15 10:36:38.894275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:17.783 [2024-11-15 10:36:38.894453] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:17.783 BaseBdev2 00:08:17.783 10:36:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.783 10:36:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:17.783 10:36:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.783 10:36:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.783 [2024-11-15 10:36:38.903513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:17.783 
[2024-11-15 10:36:38.905935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:17.783 [2024-11-15 10:36:38.906191] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:17.783 [2024-11-15 10:36:38.906216] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:17.783 [2024-11-15 10:36:38.906522] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:17.783 [2024-11-15 10:36:38.906760] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:17.783 [2024-11-15 10:36:38.906778] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:17.783 [2024-11-15 10:36:38.906966] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:17.783 10:36:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.783 10:36:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:17.783 10:36:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:17.783 10:36:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:17.783 10:36:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:17.783 10:36:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:17.783 10:36:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:17.783 10:36:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.783 10:36:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.783 10:36:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:17.783 10:36:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.783 10:36:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.784 10:36:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.784 10:36:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:17.784 10:36:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.784 10:36:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.042 10:36:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.042 "name": "raid_bdev1", 00:08:18.042 "uuid": "03b878e4-bc44-47ca-b7f4-da4375ca00a4", 00:08:18.042 "strip_size_kb": 0, 00:08:18.042 "state": "online", 00:08:18.042 "raid_level": "raid1", 00:08:18.042 "superblock": true, 00:08:18.042 "num_base_bdevs": 2, 00:08:18.042 "num_base_bdevs_discovered": 2, 00:08:18.042 "num_base_bdevs_operational": 2, 00:08:18.042 "base_bdevs_list": [ 00:08:18.042 { 00:08:18.042 "name": "BaseBdev1", 00:08:18.042 "uuid": "9f0bd15d-03c9-5238-84f9-f82272ef8cd2", 00:08:18.042 "is_configured": true, 00:08:18.042 "data_offset": 2048, 00:08:18.042 "data_size": 63488 00:08:18.042 }, 00:08:18.042 { 00:08:18.042 "name": "BaseBdev2", 00:08:18.042 "uuid": "59a7a9ce-9fe8-5242-91ea-89c0d1ead347", 00:08:18.042 "is_configured": true, 00:08:18.042 "data_offset": 2048, 00:08:18.042 "data_size": 63488 00:08:18.042 } 00:08:18.042 ] 00:08:18.042 }' 00:08:18.042 10:36:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.042 10:36:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.300 10:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:18.300 10:36:39 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:18.558 [2024-11-15 10:36:39.569056] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:19.492 10:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:19.492 10:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.492 10:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.492 10:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.492 10:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:19.492 10:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:19.492 10:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:19.492 10:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:19.492 10:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:19.492 10:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:19.492 10:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:19.492 10:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:19.492 10:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:19.492 10:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:19.492 10:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.492 10:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:08:19.492 10:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.492 10:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.492 10:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.492 10:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:19.492 10:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.492 10:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.492 10:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.492 10:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.492 "name": "raid_bdev1", 00:08:19.492 "uuid": "03b878e4-bc44-47ca-b7f4-da4375ca00a4", 00:08:19.492 "strip_size_kb": 0, 00:08:19.492 "state": "online", 00:08:19.492 "raid_level": "raid1", 00:08:19.492 "superblock": true, 00:08:19.492 "num_base_bdevs": 2, 00:08:19.492 "num_base_bdevs_discovered": 2, 00:08:19.492 "num_base_bdevs_operational": 2, 00:08:19.492 "base_bdevs_list": [ 00:08:19.492 { 00:08:19.492 "name": "BaseBdev1", 00:08:19.492 "uuid": "9f0bd15d-03c9-5238-84f9-f82272ef8cd2", 00:08:19.492 "is_configured": true, 00:08:19.492 "data_offset": 2048, 00:08:19.492 "data_size": 63488 00:08:19.492 }, 00:08:19.492 { 00:08:19.492 "name": "BaseBdev2", 00:08:19.492 "uuid": "59a7a9ce-9fe8-5242-91ea-89c0d1ead347", 00:08:19.492 "is_configured": true, 00:08:19.492 "data_offset": 2048, 00:08:19.492 "data_size": 63488 00:08:19.492 } 00:08:19.492 ] 00:08:19.492 }' 00:08:19.492 10:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.492 10:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.059 10:36:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:20.059 10:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.059 10:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.059 [2024-11-15 10:36:40.994549] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:20.059 [2024-11-15 10:36:40.994744] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:20.059 [2024-11-15 10:36:40.998035] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:20.059 [2024-11-15 10:36:40.998228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:20.059 [2024-11-15 10:36:40.998354] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:20.059 [2024-11-15 10:36:40.998377] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:20.059 { 00:08:20.059 "results": [ 00:08:20.059 { 00:08:20.059 "job": "raid_bdev1", 00:08:20.059 "core_mask": "0x1", 00:08:20.059 "workload": "randrw", 00:08:20.059 "percentage": 50, 00:08:20.059 "status": "finished", 00:08:20.059 "queue_depth": 1, 00:08:20.059 "io_size": 131072, 00:08:20.059 "runtime": 1.423228, 00:08:20.059 "iops": 12405.60191339687, 00:08:20.059 "mibps": 1550.7002391746087, 00:08:20.059 "io_failed": 0, 00:08:20.059 "io_timeout": 0, 00:08:20.059 "avg_latency_us": 76.47441117106727, 00:08:20.059 "min_latency_us": 43.054545454545455, 00:08:20.059 "max_latency_us": 1876.7127272727273 00:08:20.059 } 00:08:20.059 ], 00:08:20.059 "core_count": 1 00:08:20.059 } 00:08:20.059 10:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.059 10:36:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63510 00:08:20.059 10:36:40 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63510 ']' 00:08:20.059 10:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63510 00:08:20.059 10:36:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:20.059 10:36:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:20.059 10:36:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63510 00:08:20.059 killing process with pid 63510 00:08:20.059 10:36:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:20.059 10:36:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:20.059 10:36:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63510' 00:08:20.059 10:36:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63510 00:08:20.059 [2024-11-15 10:36:41.034697] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:20.059 10:36:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63510 00:08:20.059 [2024-11-15 10:36:41.155850] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:21.437 10:36:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.vKQD0UJ8H1 00:08:21.437 10:36:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:21.437 10:36:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:21.437 10:36:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:21.437 10:36:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:21.437 10:36:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:21.437 10:36:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:21.437 10:36:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:21.437 00:08:21.437 real 0m4.671s 00:08:21.437 user 0m5.931s 00:08:21.437 sys 0m0.575s 00:08:21.437 10:36:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.437 ************************************ 00:08:21.437 END TEST raid_read_error_test 00:08:21.437 ************************************ 00:08:21.438 10:36:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.438 10:36:42 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:21.438 10:36:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:21.438 10:36:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.438 10:36:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:21.438 ************************************ 00:08:21.438 START TEST raid_write_error_test 00:08:21.438 ************************************ 00:08:21.438 10:36:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:08:21.438 10:36:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:21.438 10:36:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:21.438 10:36:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:21.438 10:36:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:21.438 10:36:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:21.438 10:36:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:21.438 10:36:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:21.438 
10:36:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:21.438 10:36:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:21.438 10:36:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:21.438 10:36:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:21.438 10:36:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:21.438 10:36:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:21.438 10:36:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:21.438 10:36:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:21.438 10:36:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:21.438 10:36:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:21.438 10:36:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:21.438 10:36:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:21.438 10:36:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:21.438 10:36:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:21.438 10:36:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.4caMBxXsLg 00:08:21.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:21.438 10:36:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63650 00:08:21.438 10:36:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:21.438 10:36:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63650 00:08:21.438 10:36:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63650 ']' 00:08:21.438 10:36:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.438 10:36:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:21.438 10:36:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.438 10:36:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:21.438 10:36:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.438 [2024-11-15 10:36:42.429680] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:08:21.438 [2024-11-15 10:36:42.429833] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63650 ] 00:08:21.696 [2024-11-15 10:36:42.607012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.696 [2024-11-15 10:36:42.740133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.953 [2024-11-15 10:36:42.945417] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:21.954 [2024-11-15 10:36:42.945516] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.520 BaseBdev1_malloc 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.520 true 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.520 [2024-11-15 10:36:43.448016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:22.520 [2024-11-15 10:36:43.448086] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:22.520 [2024-11-15 10:36:43.448124] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:22.520 [2024-11-15 10:36:43.448144] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.520 [2024-11-15 10:36:43.450942] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.520 [2024-11-15 10:36:43.451130] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:22.520 BaseBdev1 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.520 BaseBdev2_malloc 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:22.520 10:36:43 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.520 true 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.520 [2024-11-15 10:36:43.508155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:22.520 [2024-11-15 10:36:43.508227] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:22.520 [2024-11-15 10:36:43.508253] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:22.520 [2024-11-15 10:36:43.508270] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.520 [2024-11-15 10:36:43.511059] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.520 [2024-11-15 10:36:43.511113] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:22.520 BaseBdev2 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.520 [2024-11-15 10:36:43.516233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:22.520 [2024-11-15 10:36:43.518732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:22.520 [2024-11-15 10:36:43.518999] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:22.520 [2024-11-15 10:36:43.519024] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:22.520 [2024-11-15 10:36:43.519318] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:22.520 [2024-11-15 10:36:43.519584] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:22.520 [2024-11-15 10:36:43.519603] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:22.520 [2024-11-15 10:36:43.519794] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.520 "name": "raid_bdev1", 00:08:22.520 "uuid": "18e5ea59-d17c-4ed0-9a89-bfd3bcf8e2aa", 00:08:22.520 "strip_size_kb": 0, 00:08:22.520 "state": "online", 00:08:22.520 "raid_level": "raid1", 00:08:22.520 "superblock": true, 00:08:22.520 "num_base_bdevs": 2, 00:08:22.520 "num_base_bdevs_discovered": 2, 00:08:22.520 "num_base_bdevs_operational": 2, 00:08:22.520 "base_bdevs_list": [ 00:08:22.520 { 00:08:22.520 "name": "BaseBdev1", 00:08:22.520 "uuid": "30b3608f-108a-53e4-ab4c-63fa269b425c", 00:08:22.520 "is_configured": true, 00:08:22.520 "data_offset": 2048, 00:08:22.520 "data_size": 63488 00:08:22.520 }, 00:08:22.520 { 00:08:22.520 "name": "BaseBdev2", 00:08:22.520 "uuid": "a4ce0e89-9184-536a-a54f-77409e948506", 00:08:22.520 "is_configured": true, 00:08:22.520 "data_offset": 2048, 00:08:22.520 "data_size": 63488 00:08:22.520 } 00:08:22.520 ] 00:08:22.520 }' 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.520 10:36:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.086 10:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:23.086 10:36:44 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:23.086 [2024-11-15 10:36:44.161824] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:24.145 10:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:24.145 10:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.145 10:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.145 [2024-11-15 10:36:45.042627] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:24.145 [2024-11-15 10:36:45.042701] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:24.145 [2024-11-15 10:36:45.042936] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:08:24.145 10:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.145 10:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:24.145 10:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:24.145 10:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:24.145 10:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:24.145 10:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:24.145 10:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:24.145 10:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:24.145 10:36:45 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:24.145 10:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:24.145 10:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:24.145 10:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.145 10:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.145 10:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.145 10:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.145 10:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.145 10:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:24.145 10:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.145 10:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.145 10:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.145 10:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.145 "name": "raid_bdev1", 00:08:24.145 "uuid": "18e5ea59-d17c-4ed0-9a89-bfd3bcf8e2aa", 00:08:24.145 "strip_size_kb": 0, 00:08:24.145 "state": "online", 00:08:24.145 "raid_level": "raid1", 00:08:24.145 "superblock": true, 00:08:24.145 "num_base_bdevs": 2, 00:08:24.145 "num_base_bdevs_discovered": 1, 00:08:24.145 "num_base_bdevs_operational": 1, 00:08:24.145 "base_bdevs_list": [ 00:08:24.145 { 00:08:24.145 "name": null, 00:08:24.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.145 "is_configured": false, 00:08:24.145 "data_offset": 0, 00:08:24.145 "data_size": 63488 00:08:24.145 }, 00:08:24.145 { 00:08:24.145 "name": 
"BaseBdev2", 00:08:24.145 "uuid": "a4ce0e89-9184-536a-a54f-77409e948506", 00:08:24.145 "is_configured": true, 00:08:24.145 "data_offset": 2048, 00:08:24.145 "data_size": 63488 00:08:24.145 } 00:08:24.145 ] 00:08:24.145 }' 00:08:24.145 10:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.145 10:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.419 10:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:24.419 10:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.419 10:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.419 [2024-11-15 10:36:45.569839] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:24.419 [2024-11-15 10:36:45.569875] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:24.419 [2024-11-15 10:36:45.573070] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:24.419 [2024-11-15 10:36:45.573116] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:24.419 [2024-11-15 10:36:45.573206] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:24.419 [2024-11-15 10:36:45.573224] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:24.419 { 00:08:24.420 "results": [ 00:08:24.420 { 00:08:24.420 "job": "raid_bdev1", 00:08:24.420 "core_mask": "0x1", 00:08:24.420 "workload": "randrw", 00:08:24.420 "percentage": 50, 00:08:24.420 "status": "finished", 00:08:24.420 "queue_depth": 1, 00:08:24.420 "io_size": 131072, 00:08:24.420 "runtime": 1.405303, 00:08:24.420 "iops": 14672.992230145384, 00:08:24.420 "mibps": 1834.124028768173, 00:08:24.420 "io_failed": 0, 00:08:24.420 "io_timeout": 0, 
00:08:24.420 "avg_latency_us": 63.97211110131381, 00:08:24.420 "min_latency_us": 42.123636363636365, 00:08:24.420 "max_latency_us": 1802.24 00:08:24.420 } 00:08:24.420 ], 00:08:24.420 "core_count": 1 00:08:24.420 } 00:08:24.420 10:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.420 10:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63650 00:08:24.420 10:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63650 ']' 00:08:24.420 10:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63650 00:08:24.420 10:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:24.677 10:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:24.677 10:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63650 00:08:24.677 10:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:24.677 10:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:24.677 10:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63650' 00:08:24.677 killing process with pid 63650 00:08:24.677 10:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63650 00:08:24.678 [2024-11-15 10:36:45.616179] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:24.678 10:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63650 00:08:24.678 [2024-11-15 10:36:45.739089] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:26.053 10:36:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:26.053 10:36:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job 
/raidtest/tmp.4caMBxXsLg 00:08:26.053 10:36:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:26.053 10:36:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:26.053 10:36:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:26.053 10:36:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:26.053 10:36:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:26.053 10:36:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:26.053 00:08:26.053 real 0m4.534s 00:08:26.053 user 0m5.691s 00:08:26.053 sys 0m0.556s 00:08:26.053 10:36:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.053 ************************************ 00:08:26.053 END TEST raid_write_error_test 00:08:26.053 ************************************ 00:08:26.053 10:36:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.053 10:36:46 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:26.053 10:36:46 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:26.054 10:36:46 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:26.054 10:36:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:26.054 10:36:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.054 10:36:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:26.054 ************************************ 00:08:26.054 START TEST raid_state_function_test 00:08:26.054 ************************************ 00:08:26.054 10:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:08:26.054 10:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # 
local raid_level=raid0 00:08:26.054 10:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:26.054 10:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:26.054 10:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:26.054 10:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:26.054 10:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:26.054 10:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:26.054 10:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:26.054 10:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:26.054 10:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:26.054 10:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:26.054 10:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:26.054 10:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:26.054 10:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:26.054 10:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:26.054 10:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:26.054 10:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:26.054 10:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:26.054 10:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:26.054 10:36:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:26.054 10:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:26.054 10:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:26.054 10:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:26.054 10:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:26.054 10:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:26.054 10:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:26.054 Process raid pid: 63794 00:08:26.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.054 10:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63794 00:08:26.054 10:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63794' 00:08:26.054 10:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:26.054 10:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63794 00:08:26.054 10:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63794 ']' 00:08:26.054 10:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.054 10:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:26.054 10:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:26.054 10:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:26.054 10:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.054 [2024-11-15 10:36:47.002180] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:08:26.054 [2024-11-15 10:36:47.002542] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:26.054 [2024-11-15 10:36:47.194318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.312 [2024-11-15 10:36:47.354247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.570 [2024-11-15 10:36:47.592041] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:26.570 [2024-11-15 10:36:47.592301] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:27.138 10:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:27.138 10:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:27.138 10:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:27.138 10:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.138 10:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.138 [2024-11-15 10:36:48.038134] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:27.138 [2024-11-15 10:36:48.038334] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:27.138 [2024-11-15 10:36:48.038365] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:27.138 [2024-11-15 10:36:48.038384] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:27.138 [2024-11-15 10:36:48.038395] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:27.138 [2024-11-15 10:36:48.038410] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:27.138 10:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.138 10:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:27.138 10:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.138 10:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:27.138 10:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:27.138 10:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.138 10:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:27.138 10:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.138 10:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.138 10:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.138 10:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.138 10:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.138 10:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:27.138 10:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.138 10:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.138 10:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.138 10:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.138 "name": "Existed_Raid", 00:08:27.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.138 "strip_size_kb": 64, 00:08:27.138 "state": "configuring", 00:08:27.138 "raid_level": "raid0", 00:08:27.138 "superblock": false, 00:08:27.138 "num_base_bdevs": 3, 00:08:27.138 "num_base_bdevs_discovered": 0, 00:08:27.138 "num_base_bdevs_operational": 3, 00:08:27.138 "base_bdevs_list": [ 00:08:27.138 { 00:08:27.138 "name": "BaseBdev1", 00:08:27.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.138 "is_configured": false, 00:08:27.138 "data_offset": 0, 00:08:27.138 "data_size": 0 00:08:27.138 }, 00:08:27.138 { 00:08:27.138 "name": "BaseBdev2", 00:08:27.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.138 "is_configured": false, 00:08:27.138 "data_offset": 0, 00:08:27.138 "data_size": 0 00:08:27.138 }, 00:08:27.138 { 00:08:27.138 "name": "BaseBdev3", 00:08:27.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.138 "is_configured": false, 00:08:27.138 "data_offset": 0, 00:08:27.138 "data_size": 0 00:08:27.138 } 00:08:27.138 ] 00:08:27.138 }' 00:08:27.138 10:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.138 10:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.397 10:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:27.397 10:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.397 10:36:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.397 [2024-11-15 10:36:48.542227] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:27.397 [2024-11-15 10:36:48.542275] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:27.397 10:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.397 10:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:27.397 10:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.397 10:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.397 [2024-11-15 10:36:48.550228] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:27.397 [2024-11-15 10:36:48.550293] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:27.397 [2024-11-15 10:36:48.550310] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:27.397 [2024-11-15 10:36:48.550326] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:27.397 [2024-11-15 10:36:48.550336] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:27.397 [2024-11-15 10:36:48.550351] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:27.397 10:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.397 10:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:27.397 10:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:27.397 10:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.657 [2024-11-15 10:36:48.599086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:27.657 BaseBdev1 00:08:27.657 10:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.657 10:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:27.657 10:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:27.657 10:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:27.657 10:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:27.657 10:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:27.657 10:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:27.657 10:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:27.657 10:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.657 10:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.657 10:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.657 10:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:27.657 10:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.657 10:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.657 [ 00:08:27.657 { 00:08:27.657 "name": "BaseBdev1", 00:08:27.657 "aliases": [ 00:08:27.657 "ffd7e452-cf82-4044-a2a1-05c963325d8b" 00:08:27.657 ], 00:08:27.657 
"product_name": "Malloc disk", 00:08:27.657 "block_size": 512, 00:08:27.657 "num_blocks": 65536, 00:08:27.657 "uuid": "ffd7e452-cf82-4044-a2a1-05c963325d8b", 00:08:27.657 "assigned_rate_limits": { 00:08:27.657 "rw_ios_per_sec": 0, 00:08:27.657 "rw_mbytes_per_sec": 0, 00:08:27.657 "r_mbytes_per_sec": 0, 00:08:27.657 "w_mbytes_per_sec": 0 00:08:27.657 }, 00:08:27.657 "claimed": true, 00:08:27.657 "claim_type": "exclusive_write", 00:08:27.657 "zoned": false, 00:08:27.657 "supported_io_types": { 00:08:27.657 "read": true, 00:08:27.657 "write": true, 00:08:27.657 "unmap": true, 00:08:27.657 "flush": true, 00:08:27.657 "reset": true, 00:08:27.657 "nvme_admin": false, 00:08:27.657 "nvme_io": false, 00:08:27.657 "nvme_io_md": false, 00:08:27.657 "write_zeroes": true, 00:08:27.657 "zcopy": true, 00:08:27.657 "get_zone_info": false, 00:08:27.657 "zone_management": false, 00:08:27.657 "zone_append": false, 00:08:27.657 "compare": false, 00:08:27.657 "compare_and_write": false, 00:08:27.657 "abort": true, 00:08:27.657 "seek_hole": false, 00:08:27.657 "seek_data": false, 00:08:27.657 "copy": true, 00:08:27.657 "nvme_iov_md": false 00:08:27.657 }, 00:08:27.657 "memory_domains": [ 00:08:27.657 { 00:08:27.657 "dma_device_id": "system", 00:08:27.657 "dma_device_type": 1 00:08:27.657 }, 00:08:27.657 { 00:08:27.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.657 "dma_device_type": 2 00:08:27.657 } 00:08:27.657 ], 00:08:27.657 "driver_specific": {} 00:08:27.657 } 00:08:27.657 ] 00:08:27.657 10:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.657 10:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:27.657 10:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:27.657 10:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.657 10:36:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:27.657 10:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:27.657 10:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.657 10:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:27.657 10:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.657 10:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.657 10:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.657 10:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.657 10:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.657 10:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.657 10:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.657 10:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.657 10:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.657 10:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.657 "name": "Existed_Raid", 00:08:27.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.657 "strip_size_kb": 64, 00:08:27.657 "state": "configuring", 00:08:27.657 "raid_level": "raid0", 00:08:27.657 "superblock": false, 00:08:27.657 "num_base_bdevs": 3, 00:08:27.657 "num_base_bdevs_discovered": 1, 00:08:27.657 "num_base_bdevs_operational": 3, 00:08:27.657 "base_bdevs_list": [ 00:08:27.657 { 00:08:27.657 "name": "BaseBdev1", 
00:08:27.657 "uuid": "ffd7e452-cf82-4044-a2a1-05c963325d8b", 00:08:27.657 "is_configured": true, 00:08:27.657 "data_offset": 0, 00:08:27.657 "data_size": 65536 00:08:27.657 }, 00:08:27.657 { 00:08:27.657 "name": "BaseBdev2", 00:08:27.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.657 "is_configured": false, 00:08:27.657 "data_offset": 0, 00:08:27.658 "data_size": 0 00:08:27.658 }, 00:08:27.658 { 00:08:27.658 "name": "BaseBdev3", 00:08:27.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.658 "is_configured": false, 00:08:27.658 "data_offset": 0, 00:08:27.658 "data_size": 0 00:08:27.658 } 00:08:27.658 ] 00:08:27.658 }' 00:08:27.658 10:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.658 10:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.223 10:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:28.223 10:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.223 10:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.223 [2024-11-15 10:36:49.139300] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:28.223 [2024-11-15 10:36:49.139529] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:28.223 10:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.223 10:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:28.223 10:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.223 10:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.223 [2024-11-15 
10:36:49.147348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:28.224 [2024-11-15 10:36:49.149867] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:28.224 [2024-11-15 10:36:49.149924] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:28.224 [2024-11-15 10:36:49.149941] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:28.224 [2024-11-15 10:36:49.149958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:28.224 10:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.224 10:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:28.224 10:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:28.224 10:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:28.224 10:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.224 10:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.224 10:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:28.224 10:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.224 10:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.224 10:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.224 10:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.224 10:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:28.224 10:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.224 10:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.224 10:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.224 10:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.224 10:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.224 10:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.224 10:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.224 "name": "Existed_Raid", 00:08:28.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.224 "strip_size_kb": 64, 00:08:28.224 "state": "configuring", 00:08:28.224 "raid_level": "raid0", 00:08:28.224 "superblock": false, 00:08:28.224 "num_base_bdevs": 3, 00:08:28.224 "num_base_bdevs_discovered": 1, 00:08:28.224 "num_base_bdevs_operational": 3, 00:08:28.224 "base_bdevs_list": [ 00:08:28.224 { 00:08:28.224 "name": "BaseBdev1", 00:08:28.224 "uuid": "ffd7e452-cf82-4044-a2a1-05c963325d8b", 00:08:28.224 "is_configured": true, 00:08:28.224 "data_offset": 0, 00:08:28.224 "data_size": 65536 00:08:28.224 }, 00:08:28.224 { 00:08:28.224 "name": "BaseBdev2", 00:08:28.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.224 "is_configured": false, 00:08:28.224 "data_offset": 0, 00:08:28.224 "data_size": 0 00:08:28.224 }, 00:08:28.224 { 00:08:28.224 "name": "BaseBdev3", 00:08:28.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.224 "is_configured": false, 00:08:28.224 "data_offset": 0, 00:08:28.224 "data_size": 0 00:08:28.224 } 00:08:28.224 ] 00:08:28.224 }' 00:08:28.224 10:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:28.224 10:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.790 10:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:28.790 10:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.790 10:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.790 [2024-11-15 10:36:49.705771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:28.790 BaseBdev2 00:08:28.790 10:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.790 10:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:28.790 10:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:28.790 10:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:28.790 10:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:28.790 10:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:28.790 10:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:28.790 10:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:28.790 10:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.790 10:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.790 10:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.790 10:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:28.790 10:36:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.790 10:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.790 [ 00:08:28.790 { 00:08:28.790 "name": "BaseBdev2", 00:08:28.790 "aliases": [ 00:08:28.790 "863775e1-63b1-4c6d-97da-4cfdef0e1520" 00:08:28.790 ], 00:08:28.790 "product_name": "Malloc disk", 00:08:28.790 "block_size": 512, 00:08:28.790 "num_blocks": 65536, 00:08:28.790 "uuid": "863775e1-63b1-4c6d-97da-4cfdef0e1520", 00:08:28.790 "assigned_rate_limits": { 00:08:28.790 "rw_ios_per_sec": 0, 00:08:28.790 "rw_mbytes_per_sec": 0, 00:08:28.790 "r_mbytes_per_sec": 0, 00:08:28.790 "w_mbytes_per_sec": 0 00:08:28.790 }, 00:08:28.790 "claimed": true, 00:08:28.790 "claim_type": "exclusive_write", 00:08:28.790 "zoned": false, 00:08:28.790 "supported_io_types": { 00:08:28.790 "read": true, 00:08:28.790 "write": true, 00:08:28.790 "unmap": true, 00:08:28.790 "flush": true, 00:08:28.790 "reset": true, 00:08:28.790 "nvme_admin": false, 00:08:28.790 "nvme_io": false, 00:08:28.790 "nvme_io_md": false, 00:08:28.790 "write_zeroes": true, 00:08:28.790 "zcopy": true, 00:08:28.790 "get_zone_info": false, 00:08:28.790 "zone_management": false, 00:08:28.790 "zone_append": false, 00:08:28.790 "compare": false, 00:08:28.790 "compare_and_write": false, 00:08:28.790 "abort": true, 00:08:28.790 "seek_hole": false, 00:08:28.790 "seek_data": false, 00:08:28.790 "copy": true, 00:08:28.790 "nvme_iov_md": false 00:08:28.790 }, 00:08:28.790 "memory_domains": [ 00:08:28.790 { 00:08:28.790 "dma_device_id": "system", 00:08:28.790 "dma_device_type": 1 00:08:28.790 }, 00:08:28.790 { 00:08:28.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.790 "dma_device_type": 2 00:08:28.790 } 00:08:28.790 ], 00:08:28.790 "driver_specific": {} 00:08:28.790 } 00:08:28.790 ] 00:08:28.790 10:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.790 10:36:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:28.790 10:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:28.790 10:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:28.790 10:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:28.790 10:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.790 10:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.790 10:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:28.790 10:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.790 10:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.790 10:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.790 10:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.791 10:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.791 10:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.791 10:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.791 10:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.791 10:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.791 10:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.791 10:36:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.791 10:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.791 "name": "Existed_Raid", 00:08:28.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.791 "strip_size_kb": 64, 00:08:28.791 "state": "configuring", 00:08:28.791 "raid_level": "raid0", 00:08:28.791 "superblock": false, 00:08:28.791 "num_base_bdevs": 3, 00:08:28.791 "num_base_bdevs_discovered": 2, 00:08:28.791 "num_base_bdevs_operational": 3, 00:08:28.791 "base_bdevs_list": [ 00:08:28.791 { 00:08:28.791 "name": "BaseBdev1", 00:08:28.791 "uuid": "ffd7e452-cf82-4044-a2a1-05c963325d8b", 00:08:28.791 "is_configured": true, 00:08:28.791 "data_offset": 0, 00:08:28.791 "data_size": 65536 00:08:28.791 }, 00:08:28.791 { 00:08:28.791 "name": "BaseBdev2", 00:08:28.791 "uuid": "863775e1-63b1-4c6d-97da-4cfdef0e1520", 00:08:28.791 "is_configured": true, 00:08:28.791 "data_offset": 0, 00:08:28.791 "data_size": 65536 00:08:28.791 }, 00:08:28.791 { 00:08:28.791 "name": "BaseBdev3", 00:08:28.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.791 "is_configured": false, 00:08:28.791 "data_offset": 0, 00:08:28.791 "data_size": 0 00:08:28.791 } 00:08:28.791 ] 00:08:28.791 }' 00:08:28.791 10:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.791 10:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.358 10:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:29.358 10:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.358 10:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.358 [2024-11-15 10:36:50.299784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:29.358 [2024-11-15 10:36:50.299840] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:29.358 [2024-11-15 10:36:50.299863] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:29.358 [2024-11-15 10:36:50.300222] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:29.358 [2024-11-15 10:36:50.300456] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:29.358 [2024-11-15 10:36:50.300478] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:29.358 [2024-11-15 10:36:50.300848] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:29.358 BaseBdev3 00:08:29.358 10:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.358 10:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:29.358 10:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:29.358 10:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:29.358 10:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:29.358 10:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:29.358 10:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:29.358 10:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:29.358 10:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.358 10:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.358 10:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.358 
10:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:29.358 10:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.358 10:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.358 [ 00:08:29.358 { 00:08:29.358 "name": "BaseBdev3", 00:08:29.358 "aliases": [ 00:08:29.358 "fcbeb985-fe81-45e1-93b6-10e45a769d9d" 00:08:29.358 ], 00:08:29.358 "product_name": "Malloc disk", 00:08:29.358 "block_size": 512, 00:08:29.358 "num_blocks": 65536, 00:08:29.358 "uuid": "fcbeb985-fe81-45e1-93b6-10e45a769d9d", 00:08:29.358 "assigned_rate_limits": { 00:08:29.358 "rw_ios_per_sec": 0, 00:08:29.358 "rw_mbytes_per_sec": 0, 00:08:29.358 "r_mbytes_per_sec": 0, 00:08:29.358 "w_mbytes_per_sec": 0 00:08:29.358 }, 00:08:29.358 "claimed": true, 00:08:29.358 "claim_type": "exclusive_write", 00:08:29.358 "zoned": false, 00:08:29.358 "supported_io_types": { 00:08:29.358 "read": true, 00:08:29.358 "write": true, 00:08:29.358 "unmap": true, 00:08:29.358 "flush": true, 00:08:29.358 "reset": true, 00:08:29.358 "nvme_admin": false, 00:08:29.358 "nvme_io": false, 00:08:29.358 "nvme_io_md": false, 00:08:29.358 "write_zeroes": true, 00:08:29.358 "zcopy": true, 00:08:29.358 "get_zone_info": false, 00:08:29.358 "zone_management": false, 00:08:29.358 "zone_append": false, 00:08:29.358 "compare": false, 00:08:29.358 "compare_and_write": false, 00:08:29.358 "abort": true, 00:08:29.358 "seek_hole": false, 00:08:29.358 "seek_data": false, 00:08:29.358 "copy": true, 00:08:29.358 "nvme_iov_md": false 00:08:29.358 }, 00:08:29.358 "memory_domains": [ 00:08:29.358 { 00:08:29.358 "dma_device_id": "system", 00:08:29.358 "dma_device_type": 1 00:08:29.358 }, 00:08:29.358 { 00:08:29.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.358 "dma_device_type": 2 00:08:29.358 } 00:08:29.358 ], 00:08:29.358 "driver_specific": {} 00:08:29.358 } 00:08:29.358 ] 
00:08:29.358 10:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.358 10:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:29.358 10:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:29.358 10:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:29.358 10:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:29.358 10:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.358 10:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:29.358 10:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:29.358 10:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.358 10:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.358 10:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.358 10:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.358 10:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.358 10:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.358 10:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.358 10:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.358 10:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.358 10:36:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:29.358 10:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.358 10:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.358 "name": "Existed_Raid", 00:08:29.358 "uuid": "77ddcc72-37f7-4684-9f3c-ec0fa70a6d4a", 00:08:29.358 "strip_size_kb": 64, 00:08:29.358 "state": "online", 00:08:29.358 "raid_level": "raid0", 00:08:29.358 "superblock": false, 00:08:29.358 "num_base_bdevs": 3, 00:08:29.358 "num_base_bdevs_discovered": 3, 00:08:29.358 "num_base_bdevs_operational": 3, 00:08:29.358 "base_bdevs_list": [ 00:08:29.358 { 00:08:29.358 "name": "BaseBdev1", 00:08:29.358 "uuid": "ffd7e452-cf82-4044-a2a1-05c963325d8b", 00:08:29.358 "is_configured": true, 00:08:29.358 "data_offset": 0, 00:08:29.358 "data_size": 65536 00:08:29.358 }, 00:08:29.358 { 00:08:29.358 "name": "BaseBdev2", 00:08:29.358 "uuid": "863775e1-63b1-4c6d-97da-4cfdef0e1520", 00:08:29.358 "is_configured": true, 00:08:29.358 "data_offset": 0, 00:08:29.358 "data_size": 65536 00:08:29.358 }, 00:08:29.358 { 00:08:29.358 "name": "BaseBdev3", 00:08:29.358 "uuid": "fcbeb985-fe81-45e1-93b6-10e45a769d9d", 00:08:29.359 "is_configured": true, 00:08:29.359 "data_offset": 0, 00:08:29.359 "data_size": 65536 00:08:29.359 } 00:08:29.359 ] 00:08:29.359 }' 00:08:29.359 10:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.359 10:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.926 10:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:29.926 10:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:29.926 10:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:29.926 10:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:08:29.926 10:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:29.926 10:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:29.926 10:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:29.926 10:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:29.926 10:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.926 10:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.926 [2024-11-15 10:36:50.848394] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:29.926 10:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.926 10:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:29.926 "name": "Existed_Raid", 00:08:29.926 "aliases": [ 00:08:29.926 "77ddcc72-37f7-4684-9f3c-ec0fa70a6d4a" 00:08:29.926 ], 00:08:29.926 "product_name": "Raid Volume", 00:08:29.926 "block_size": 512, 00:08:29.926 "num_blocks": 196608, 00:08:29.926 "uuid": "77ddcc72-37f7-4684-9f3c-ec0fa70a6d4a", 00:08:29.926 "assigned_rate_limits": { 00:08:29.926 "rw_ios_per_sec": 0, 00:08:29.926 "rw_mbytes_per_sec": 0, 00:08:29.926 "r_mbytes_per_sec": 0, 00:08:29.926 "w_mbytes_per_sec": 0 00:08:29.926 }, 00:08:29.926 "claimed": false, 00:08:29.926 "zoned": false, 00:08:29.926 "supported_io_types": { 00:08:29.926 "read": true, 00:08:29.926 "write": true, 00:08:29.926 "unmap": true, 00:08:29.926 "flush": true, 00:08:29.926 "reset": true, 00:08:29.926 "nvme_admin": false, 00:08:29.926 "nvme_io": false, 00:08:29.926 "nvme_io_md": false, 00:08:29.926 "write_zeroes": true, 00:08:29.926 "zcopy": false, 00:08:29.926 "get_zone_info": false, 00:08:29.926 "zone_management": false, 00:08:29.926 
"zone_append": false, 00:08:29.926 "compare": false, 00:08:29.926 "compare_and_write": false, 00:08:29.926 "abort": false, 00:08:29.926 "seek_hole": false, 00:08:29.926 "seek_data": false, 00:08:29.926 "copy": false, 00:08:29.926 "nvme_iov_md": false 00:08:29.926 }, 00:08:29.926 "memory_domains": [ 00:08:29.926 { 00:08:29.926 "dma_device_id": "system", 00:08:29.926 "dma_device_type": 1 00:08:29.926 }, 00:08:29.926 { 00:08:29.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.926 "dma_device_type": 2 00:08:29.926 }, 00:08:29.926 { 00:08:29.926 "dma_device_id": "system", 00:08:29.926 "dma_device_type": 1 00:08:29.926 }, 00:08:29.926 { 00:08:29.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.926 "dma_device_type": 2 00:08:29.926 }, 00:08:29.926 { 00:08:29.926 "dma_device_id": "system", 00:08:29.926 "dma_device_type": 1 00:08:29.926 }, 00:08:29.926 { 00:08:29.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.926 "dma_device_type": 2 00:08:29.926 } 00:08:29.926 ], 00:08:29.926 "driver_specific": { 00:08:29.926 "raid": { 00:08:29.926 "uuid": "77ddcc72-37f7-4684-9f3c-ec0fa70a6d4a", 00:08:29.926 "strip_size_kb": 64, 00:08:29.926 "state": "online", 00:08:29.926 "raid_level": "raid0", 00:08:29.926 "superblock": false, 00:08:29.926 "num_base_bdevs": 3, 00:08:29.926 "num_base_bdevs_discovered": 3, 00:08:29.926 "num_base_bdevs_operational": 3, 00:08:29.926 "base_bdevs_list": [ 00:08:29.926 { 00:08:29.926 "name": "BaseBdev1", 00:08:29.926 "uuid": "ffd7e452-cf82-4044-a2a1-05c963325d8b", 00:08:29.926 "is_configured": true, 00:08:29.926 "data_offset": 0, 00:08:29.926 "data_size": 65536 00:08:29.926 }, 00:08:29.926 { 00:08:29.926 "name": "BaseBdev2", 00:08:29.926 "uuid": "863775e1-63b1-4c6d-97da-4cfdef0e1520", 00:08:29.926 "is_configured": true, 00:08:29.926 "data_offset": 0, 00:08:29.926 "data_size": 65536 00:08:29.926 }, 00:08:29.926 { 00:08:29.926 "name": "BaseBdev3", 00:08:29.926 "uuid": "fcbeb985-fe81-45e1-93b6-10e45a769d9d", 00:08:29.926 "is_configured": true, 
00:08:29.926 "data_offset": 0, 00:08:29.926 "data_size": 65536 00:08:29.926 } 00:08:29.926 ] 00:08:29.926 } 00:08:29.926 } 00:08:29.926 }' 00:08:29.926 10:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:29.926 10:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:29.926 BaseBdev2 00:08:29.926 BaseBdev3' 00:08:29.926 10:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.926 10:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:29.926 10:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:29.926 10:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.926 10:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:29.926 10:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.926 10:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.926 10:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.926 10:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:29.926 10:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:29.926 10:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:29.926 10:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:29.926 10:36:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.926 10:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.926 10:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.926 10:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.185 10:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:30.185 10:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:30.185 10:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:30.185 10:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:30.185 10:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.185 10:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.185 10:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.185 10:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.185 10:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:30.185 10:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:30.185 10:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:30.185 10:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.185 10:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.185 [2024-11-15 10:36:51.164134] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:30.185 [2024-11-15 10:36:51.164170] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:30.185 [2024-11-15 10:36:51.164242] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:30.185 10:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.185 10:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:30.185 10:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:30.185 10:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:30.185 10:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:30.185 10:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:30.185 10:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:30.185 10:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.185 10:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:30.185 10:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:30.185 10:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.185 10:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:30.185 10:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.185 10:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.185 10:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:30.185 10:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.185 10:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.185 10:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.185 10:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.185 10:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.185 10:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.185 10:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.185 "name": "Existed_Raid", 00:08:30.185 "uuid": "77ddcc72-37f7-4684-9f3c-ec0fa70a6d4a", 00:08:30.185 "strip_size_kb": 64, 00:08:30.185 "state": "offline", 00:08:30.185 "raid_level": "raid0", 00:08:30.185 "superblock": false, 00:08:30.185 "num_base_bdevs": 3, 00:08:30.185 "num_base_bdevs_discovered": 2, 00:08:30.185 "num_base_bdevs_operational": 2, 00:08:30.185 "base_bdevs_list": [ 00:08:30.185 { 00:08:30.185 "name": null, 00:08:30.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.185 "is_configured": false, 00:08:30.185 "data_offset": 0, 00:08:30.185 "data_size": 65536 00:08:30.185 }, 00:08:30.185 { 00:08:30.185 "name": "BaseBdev2", 00:08:30.185 "uuid": "863775e1-63b1-4c6d-97da-4cfdef0e1520", 00:08:30.185 "is_configured": true, 00:08:30.185 "data_offset": 0, 00:08:30.185 "data_size": 65536 00:08:30.185 }, 00:08:30.185 { 00:08:30.185 "name": "BaseBdev3", 00:08:30.185 "uuid": "fcbeb985-fe81-45e1-93b6-10e45a769d9d", 00:08:30.185 "is_configured": true, 00:08:30.185 "data_offset": 0, 00:08:30.185 "data_size": 65536 00:08:30.185 } 00:08:30.185 ] 00:08:30.185 }' 00:08:30.185 10:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.185 10:36:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.751 10:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:30.751 10:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:30.751 10:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:30.751 10:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.751 10:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.751 10:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.751 10:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.751 10:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:30.751 10:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:30.751 10:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:30.751 10:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.751 10:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.751 [2024-11-15 10:36:51.812998] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:30.751 10:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.751 10:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:30.751 10:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:30.751 10:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.751 10:36:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.751 10:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:30.751 10:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.009 10:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.009 10:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:31.009 10:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:31.009 10:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:31.009 10:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.009 10:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.009 [2024-11-15 10:36:51.951170] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:31.009 [2024-11-15 10:36:51.951243] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:31.009 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.009 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:31.009 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:31.009 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.009 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:31.009 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.009 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:31.009 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.009 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:31.009 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:31.009 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:31.009 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:31.009 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:31.009 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:31.009 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.009 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.009 BaseBdev2 00:08:31.009 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.009 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:31.009 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:31.009 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:31.009 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:31.009 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:31.009 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:31.009 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:31.009 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:31.009 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.010 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.010 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:31.010 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.010 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.010 [ 00:08:31.010 { 00:08:31.010 "name": "BaseBdev2", 00:08:31.010 "aliases": [ 00:08:31.010 "525a0ab2-3488-4169-a7db-0c93690d524c" 00:08:31.010 ], 00:08:31.010 "product_name": "Malloc disk", 00:08:31.010 "block_size": 512, 00:08:31.010 "num_blocks": 65536, 00:08:31.010 "uuid": "525a0ab2-3488-4169-a7db-0c93690d524c", 00:08:31.010 "assigned_rate_limits": { 00:08:31.010 "rw_ios_per_sec": 0, 00:08:31.010 "rw_mbytes_per_sec": 0, 00:08:31.010 "r_mbytes_per_sec": 0, 00:08:31.010 "w_mbytes_per_sec": 0 00:08:31.010 }, 00:08:31.010 "claimed": false, 00:08:31.010 "zoned": false, 00:08:31.010 "supported_io_types": { 00:08:31.010 "read": true, 00:08:31.010 "write": true, 00:08:31.010 "unmap": true, 00:08:31.010 "flush": true, 00:08:31.010 "reset": true, 00:08:31.010 "nvme_admin": false, 00:08:31.010 "nvme_io": false, 00:08:31.010 "nvme_io_md": false, 00:08:31.010 "write_zeroes": true, 00:08:31.010 "zcopy": true, 00:08:31.010 "get_zone_info": false, 00:08:31.010 "zone_management": false, 00:08:31.010 "zone_append": false, 00:08:31.010 "compare": false, 00:08:31.010 "compare_and_write": false, 00:08:31.010 "abort": true, 00:08:31.010 "seek_hole": false, 00:08:31.010 "seek_data": false, 00:08:31.010 "copy": true, 00:08:31.010 "nvme_iov_md": false 00:08:31.010 }, 00:08:31.010 "memory_domains": [ 00:08:31.010 { 00:08:31.010 "dma_device_id": "system", 00:08:31.010 "dma_device_type": 1 00:08:31.010 }, 
00:08:31.010 { 00:08:31.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.010 "dma_device_type": 2 00:08:31.010 } 00:08:31.010 ], 00:08:31.010 "driver_specific": {} 00:08:31.010 } 00:08:31.010 ] 00:08:31.010 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.268 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:31.268 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:31.268 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:31.268 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:31.268 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.268 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.268 BaseBdev3 00:08:31.268 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.268 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:31.268 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:31.268 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:31.269 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:31.269 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:31.269 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:31.269 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:31.269 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
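The trace above shows the `waitforbdev BaseBdev2` helper from `autotest_common.sh` setting a default 2000 ms timeout, waiting for examine, then polling `bdev_get_bdevs`. A stubbed, self-contained re-sketch of that polling loop (the `rpc_cmd` stub below is a fake standing in for SPDK's real RPC client, and the retry count is an assumption, not the script's actual value):

```shell
# Fake rpc_cmd: always reports the bdev as present. The real one calls
# SPDK's rpc.py against a running target.
rpc_cmd() { echo '[{"name":"BaseBdev2"}]'; }

# Sketch of waitforbdev: retry bdev_get_bdevs until the named bdev shows
# up in the output, or give up after a bounded number of attempts.
waitforbdev() {
    local bdev_name=$1 bdev_timeout=${2:-2000} i
    for ((i = 0; i < 20; i++)); do
        if rpc_cmd bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" |
            grep -q "\"name\": *\"$bdev_name\""; then
            return 0    # bdev registered and examined
        fi
        sleep 0.1
    done
    return 1            # timed out waiting for the bdev
}

waitforbdev BaseBdev2 && echo "BaseBdev2 ready"
```

The test proper only proceeds once `waitforbdev` returns 0, which is why every `bdev_malloc_create` in the log is immediately followed by this wait.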
00:08:31.269 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.269 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.269 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:31.269 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.269 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.269 [ 00:08:31.269 { 00:08:31.269 "name": "BaseBdev3", 00:08:31.269 "aliases": [ 00:08:31.269 "70ceaeef-18f4-4cef-bed5-c22fbc29124f" 00:08:31.269 ], 00:08:31.269 "product_name": "Malloc disk", 00:08:31.269 "block_size": 512, 00:08:31.269 "num_blocks": 65536, 00:08:31.269 "uuid": "70ceaeef-18f4-4cef-bed5-c22fbc29124f", 00:08:31.269 "assigned_rate_limits": { 00:08:31.269 "rw_ios_per_sec": 0, 00:08:31.269 "rw_mbytes_per_sec": 0, 00:08:31.269 "r_mbytes_per_sec": 0, 00:08:31.269 "w_mbytes_per_sec": 0 00:08:31.269 }, 00:08:31.269 "claimed": false, 00:08:31.269 "zoned": false, 00:08:31.269 "supported_io_types": { 00:08:31.269 "read": true, 00:08:31.269 "write": true, 00:08:31.269 "unmap": true, 00:08:31.269 "flush": true, 00:08:31.269 "reset": true, 00:08:31.269 "nvme_admin": false, 00:08:31.269 "nvme_io": false, 00:08:31.269 "nvme_io_md": false, 00:08:31.269 "write_zeroes": true, 00:08:31.269 "zcopy": true, 00:08:31.269 "get_zone_info": false, 00:08:31.269 "zone_management": false, 00:08:31.269 "zone_append": false, 00:08:31.269 "compare": false, 00:08:31.269 "compare_and_write": false, 00:08:31.269 "abort": true, 00:08:31.269 "seek_hole": false, 00:08:31.269 "seek_data": false, 00:08:31.269 "copy": true, 00:08:31.269 "nvme_iov_md": false 00:08:31.269 }, 00:08:31.269 "memory_domains": [ 00:08:31.269 { 00:08:31.269 "dma_device_id": "system", 00:08:31.269 "dma_device_type": 1 00:08:31.269 }, 00:08:31.269 { 
00:08:31.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.269 "dma_device_type": 2 00:08:31.269 } 00:08:31.269 ], 00:08:31.269 "driver_specific": {} 00:08:31.269 } 00:08:31.269 ] 00:08:31.269 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.269 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:31.269 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:31.269 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:31.269 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:31.269 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.269 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.269 [2024-11-15 10:36:52.252063] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:31.269 [2024-11-15 10:36:52.252122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:31.269 [2024-11-15 10:36:52.252156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:31.269 [2024-11-15 10:36:52.254578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:31.269 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.269 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:31.269 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.269 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:31.269 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:31.269 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.269 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:31.269 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.269 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.269 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.269 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.269 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.269 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.269 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.269 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.269 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.269 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.269 "name": "Existed_Raid", 00:08:31.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.269 "strip_size_kb": 64, 00:08:31.269 "state": "configuring", 00:08:31.269 "raid_level": "raid0", 00:08:31.270 "superblock": false, 00:08:31.270 "num_base_bdevs": 3, 00:08:31.270 "num_base_bdevs_discovered": 2, 00:08:31.270 "num_base_bdevs_operational": 3, 00:08:31.270 "base_bdevs_list": [ 00:08:31.270 { 00:08:31.270 "name": "BaseBdev1", 00:08:31.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.270 
"is_configured": false, 00:08:31.270 "data_offset": 0, 00:08:31.270 "data_size": 0 00:08:31.270 }, 00:08:31.270 { 00:08:31.270 "name": "BaseBdev2", 00:08:31.270 "uuid": "525a0ab2-3488-4169-a7db-0c93690d524c", 00:08:31.270 "is_configured": true, 00:08:31.270 "data_offset": 0, 00:08:31.270 "data_size": 65536 00:08:31.270 }, 00:08:31.270 { 00:08:31.270 "name": "BaseBdev3", 00:08:31.270 "uuid": "70ceaeef-18f4-4cef-bed5-c22fbc29124f", 00:08:31.270 "is_configured": true, 00:08:31.270 "data_offset": 0, 00:08:31.270 "data_size": 65536 00:08:31.270 } 00:08:31.270 ] 00:08:31.270 }' 00:08:31.270 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.270 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.836 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:31.836 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.836 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.836 [2024-11-15 10:36:52.748207] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:31.836 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.836 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:31.836 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.836 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:31.836 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:31.836 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.836 10:36:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:31.836 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.836 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.836 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.836 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.836 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.836 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.836 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.836 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.836 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.836 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.836 "name": "Existed_Raid", 00:08:31.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.836 "strip_size_kb": 64, 00:08:31.836 "state": "configuring", 00:08:31.836 "raid_level": "raid0", 00:08:31.836 "superblock": false, 00:08:31.836 "num_base_bdevs": 3, 00:08:31.836 "num_base_bdevs_discovered": 1, 00:08:31.836 "num_base_bdevs_operational": 3, 00:08:31.836 "base_bdevs_list": [ 00:08:31.836 { 00:08:31.836 "name": "BaseBdev1", 00:08:31.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.836 "is_configured": false, 00:08:31.836 "data_offset": 0, 00:08:31.836 "data_size": 0 00:08:31.836 }, 00:08:31.836 { 00:08:31.836 "name": null, 00:08:31.836 "uuid": "525a0ab2-3488-4169-a7db-0c93690d524c", 00:08:31.836 "is_configured": false, 00:08:31.836 "data_offset": 0, 
00:08:31.836 "data_size": 65536 00:08:31.836 }, 00:08:31.836 { 00:08:31.836 "name": "BaseBdev3", 00:08:31.836 "uuid": "70ceaeef-18f4-4cef-bed5-c22fbc29124f", 00:08:31.836 "is_configured": true, 00:08:31.836 "data_offset": 0, 00:08:31.836 "data_size": 65536 00:08:31.836 } 00:08:31.836 ] 00:08:31.836 }' 00:08:31.836 10:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.836 10:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.404 10:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.404 10:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.404 10:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:32.404 10:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.404 10:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.404 10:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:32.404 10:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:32.404 10:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.404 10:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.404 [2024-11-15 10:36:53.378622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:32.404 BaseBdev1 00:08:32.404 10:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.404 10:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:32.404 10:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:08:32.404 10:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:32.404 10:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:32.404 10:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:32.404 10:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:32.404 10:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:32.404 10:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.404 10:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.404 10:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.404 10:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:32.404 10:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.404 10:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.404 [ 00:08:32.404 { 00:08:32.404 "name": "BaseBdev1", 00:08:32.404 "aliases": [ 00:08:32.404 "c3b11443-cd85-407b-870f-af06dd3ca69f" 00:08:32.404 ], 00:08:32.404 "product_name": "Malloc disk", 00:08:32.404 "block_size": 512, 00:08:32.404 "num_blocks": 65536, 00:08:32.404 "uuid": "c3b11443-cd85-407b-870f-af06dd3ca69f", 00:08:32.404 "assigned_rate_limits": { 00:08:32.404 "rw_ios_per_sec": 0, 00:08:32.404 "rw_mbytes_per_sec": 0, 00:08:32.404 "r_mbytes_per_sec": 0, 00:08:32.404 "w_mbytes_per_sec": 0 00:08:32.404 }, 00:08:32.404 "claimed": true, 00:08:32.404 "claim_type": "exclusive_write", 00:08:32.404 "zoned": false, 00:08:32.404 "supported_io_types": { 00:08:32.404 "read": true, 00:08:32.404 "write": true, 00:08:32.404 "unmap": 
true, 00:08:32.404 "flush": true, 00:08:32.404 "reset": true, 00:08:32.404 "nvme_admin": false, 00:08:32.404 "nvme_io": false, 00:08:32.404 "nvme_io_md": false, 00:08:32.404 "write_zeroes": true, 00:08:32.404 "zcopy": true, 00:08:32.404 "get_zone_info": false, 00:08:32.404 "zone_management": false, 00:08:32.404 "zone_append": false, 00:08:32.404 "compare": false, 00:08:32.404 "compare_and_write": false, 00:08:32.404 "abort": true, 00:08:32.404 "seek_hole": false, 00:08:32.404 "seek_data": false, 00:08:32.404 "copy": true, 00:08:32.404 "nvme_iov_md": false 00:08:32.404 }, 00:08:32.404 "memory_domains": [ 00:08:32.404 { 00:08:32.404 "dma_device_id": "system", 00:08:32.404 "dma_device_type": 1 00:08:32.404 }, 00:08:32.404 { 00:08:32.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.404 "dma_device_type": 2 00:08:32.404 } 00:08:32.404 ], 00:08:32.404 "driver_specific": {} 00:08:32.404 } 00:08:32.404 ] 00:08:32.404 10:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.404 10:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:32.404 10:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:32.404 10:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.404 10:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.404 10:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:32.404 10:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.404 10:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.404 10:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.404 10:36:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.404 10:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.404 10:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.404 10:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.404 10:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.404 10:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.404 10:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.404 10:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.404 10:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.404 "name": "Existed_Raid", 00:08:32.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.404 "strip_size_kb": 64, 00:08:32.404 "state": "configuring", 00:08:32.404 "raid_level": "raid0", 00:08:32.404 "superblock": false, 00:08:32.404 "num_base_bdevs": 3, 00:08:32.404 "num_base_bdevs_discovered": 2, 00:08:32.404 "num_base_bdevs_operational": 3, 00:08:32.404 "base_bdevs_list": [ 00:08:32.404 { 00:08:32.404 "name": "BaseBdev1", 00:08:32.404 "uuid": "c3b11443-cd85-407b-870f-af06dd3ca69f", 00:08:32.404 "is_configured": true, 00:08:32.404 "data_offset": 0, 00:08:32.404 "data_size": 65536 00:08:32.404 }, 00:08:32.404 { 00:08:32.404 "name": null, 00:08:32.404 "uuid": "525a0ab2-3488-4169-a7db-0c93690d524c", 00:08:32.404 "is_configured": false, 00:08:32.404 "data_offset": 0, 00:08:32.404 "data_size": 65536 00:08:32.404 }, 00:08:32.404 { 00:08:32.404 "name": "BaseBdev3", 00:08:32.404 "uuid": "70ceaeef-18f4-4cef-bed5-c22fbc29124f", 00:08:32.404 "is_configured": true, 00:08:32.404 "data_offset": 0, 
00:08:32.404 "data_size": 65536 00:08:32.404 } 00:08:32.404 ] 00:08:32.404 }' 00:08:32.404 10:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.404 10:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.971 10:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.971 10:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:32.971 10:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.971 10:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.971 10:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.971 10:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:32.971 10:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:32.971 10:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.971 10:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.971 [2024-11-15 10:36:54.018840] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:32.971 10:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.971 10:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:32.971 10:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.971 10:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.971 10:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:32.971 10:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.971 10:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.971 10:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.971 10:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.971 10:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.971 10:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.971 10:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.971 10:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.971 10:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.971 10:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.971 10:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.971 10:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.971 "name": "Existed_Raid", 00:08:32.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.971 "strip_size_kb": 64, 00:08:32.971 "state": "configuring", 00:08:32.971 "raid_level": "raid0", 00:08:32.971 "superblock": false, 00:08:32.971 "num_base_bdevs": 3, 00:08:32.971 "num_base_bdevs_discovered": 1, 00:08:32.971 "num_base_bdevs_operational": 3, 00:08:32.971 "base_bdevs_list": [ 00:08:32.971 { 00:08:32.971 "name": "BaseBdev1", 00:08:32.971 "uuid": "c3b11443-cd85-407b-870f-af06dd3ca69f", 00:08:32.971 "is_configured": true, 00:08:32.971 "data_offset": 0, 00:08:32.971 "data_size": 65536 00:08:32.971 }, 00:08:32.971 { 
00:08:32.971 "name": null, 00:08:32.971 "uuid": "525a0ab2-3488-4169-a7db-0c93690d524c", 00:08:32.971 "is_configured": false, 00:08:32.971 "data_offset": 0, 00:08:32.971 "data_size": 65536 00:08:32.971 }, 00:08:32.971 { 00:08:32.971 "name": null, 00:08:32.971 "uuid": "70ceaeef-18f4-4cef-bed5-c22fbc29124f", 00:08:32.971 "is_configured": false, 00:08:32.971 "data_offset": 0, 00:08:32.971 "data_size": 65536 00:08:32.971 } 00:08:32.971 ] 00:08:32.971 }' 00:08:32.971 10:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.971 10:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.537 10:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.537 10:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.537 10:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:33.537 10:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.537 10:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.537 10:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:33.537 10:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:33.537 10:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.537 10:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.537 [2024-11-15 10:36:54.583019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:33.537 10:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.537 10:36:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:33.537 10:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.537 10:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.537 10:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:33.537 10:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.537 10:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.537 10:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.537 10:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.537 10:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.537 10:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.537 10:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.537 10:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.537 10:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.537 10:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.537 10:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.537 10:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.537 "name": "Existed_Raid", 00:08:33.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.537 "strip_size_kb": 64, 00:08:33.537 "state": "configuring", 00:08:33.537 "raid_level": "raid0", 00:08:33.537 
"superblock": false, 00:08:33.537 "num_base_bdevs": 3, 00:08:33.537 "num_base_bdevs_discovered": 2, 00:08:33.537 "num_base_bdevs_operational": 3, 00:08:33.537 "base_bdevs_list": [ 00:08:33.537 { 00:08:33.537 "name": "BaseBdev1", 00:08:33.537 "uuid": "c3b11443-cd85-407b-870f-af06dd3ca69f", 00:08:33.537 "is_configured": true, 00:08:33.537 "data_offset": 0, 00:08:33.537 "data_size": 65536 00:08:33.537 }, 00:08:33.537 { 00:08:33.537 "name": null, 00:08:33.537 "uuid": "525a0ab2-3488-4169-a7db-0c93690d524c", 00:08:33.537 "is_configured": false, 00:08:33.537 "data_offset": 0, 00:08:33.537 "data_size": 65536 00:08:33.537 }, 00:08:33.537 { 00:08:33.537 "name": "BaseBdev3", 00:08:33.537 "uuid": "70ceaeef-18f4-4cef-bed5-c22fbc29124f", 00:08:33.537 "is_configured": true, 00:08:33.537 "data_offset": 0, 00:08:33.537 "data_size": 65536 00:08:33.537 } 00:08:33.537 ] 00:08:33.537 }' 00:08:33.537 10:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.537 10:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.104 10:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.104 10:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.104 10:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.104 10:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:34.104 10:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.104 10:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:34.104 10:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:34.104 10:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:34.104 10:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.104 [2024-11-15 10:36:55.163229] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:34.104 10:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.104 10:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:34.104 10:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.104 10:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.104 10:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:34.104 10:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.104 10:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.104 10:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.104 10:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.104 10:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.104 10:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.104 10:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.104 10:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.104 10:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.104 10:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.362 10:36:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.362 10:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.362 "name": "Existed_Raid", 00:08:34.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.362 "strip_size_kb": 64, 00:08:34.362 "state": "configuring", 00:08:34.362 "raid_level": "raid0", 00:08:34.362 "superblock": false, 00:08:34.362 "num_base_bdevs": 3, 00:08:34.362 "num_base_bdevs_discovered": 1, 00:08:34.362 "num_base_bdevs_operational": 3, 00:08:34.362 "base_bdevs_list": [ 00:08:34.362 { 00:08:34.362 "name": null, 00:08:34.362 "uuid": "c3b11443-cd85-407b-870f-af06dd3ca69f", 00:08:34.362 "is_configured": false, 00:08:34.362 "data_offset": 0, 00:08:34.362 "data_size": 65536 00:08:34.362 }, 00:08:34.362 { 00:08:34.362 "name": null, 00:08:34.362 "uuid": "525a0ab2-3488-4169-a7db-0c93690d524c", 00:08:34.362 "is_configured": false, 00:08:34.362 "data_offset": 0, 00:08:34.362 "data_size": 65536 00:08:34.362 }, 00:08:34.362 { 00:08:34.362 "name": "BaseBdev3", 00:08:34.362 "uuid": "70ceaeef-18f4-4cef-bed5-c22fbc29124f", 00:08:34.362 "is_configured": true, 00:08:34.362 "data_offset": 0, 00:08:34.362 "data_size": 65536 00:08:34.362 } 00:08:34.362 ] 00:08:34.362 }' 00:08:34.362 10:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.362 10:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.620 10:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.620 10:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:34.620 10:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.620 10:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.878 10:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:08:34.878 10:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:34.878 10:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:34.878 10:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.878 10:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.878 [2024-11-15 10:36:55.816580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:34.878 10:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.878 10:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:34.878 10:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.878 10:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.878 10:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:34.878 10:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.878 10:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.878 10:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.878 10:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.878 10:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.878 10:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.878 10:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:34.878 10:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.878 10:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.878 10:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.878 10:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.878 10:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.878 "name": "Existed_Raid", 00:08:34.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.878 "strip_size_kb": 64, 00:08:34.878 "state": "configuring", 00:08:34.878 "raid_level": "raid0", 00:08:34.878 "superblock": false, 00:08:34.878 "num_base_bdevs": 3, 00:08:34.878 "num_base_bdevs_discovered": 2, 00:08:34.878 "num_base_bdevs_operational": 3, 00:08:34.878 "base_bdevs_list": [ 00:08:34.878 { 00:08:34.878 "name": null, 00:08:34.878 "uuid": "c3b11443-cd85-407b-870f-af06dd3ca69f", 00:08:34.878 "is_configured": false, 00:08:34.878 "data_offset": 0, 00:08:34.878 "data_size": 65536 00:08:34.878 }, 00:08:34.878 { 00:08:34.878 "name": "BaseBdev2", 00:08:34.878 "uuid": "525a0ab2-3488-4169-a7db-0c93690d524c", 00:08:34.878 "is_configured": true, 00:08:34.878 "data_offset": 0, 00:08:34.878 "data_size": 65536 00:08:34.878 }, 00:08:34.878 { 00:08:34.878 "name": "BaseBdev3", 00:08:34.878 "uuid": "70ceaeef-18f4-4cef-bed5-c22fbc29124f", 00:08:34.878 "is_configured": true, 00:08:34.878 "data_offset": 0, 00:08:34.878 "data_size": 65536 00:08:34.878 } 00:08:34.878 ] 00:08:34.878 }' 00:08:34.878 10:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.878 10:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.459 10:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.459 10:36:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.459 10:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.459 10:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:35.459 10:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.459 10:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:35.459 10:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.459 10:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:35.459 10:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.459 10:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.459 10:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.459 10:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c3b11443-cd85-407b-870f-af06dd3ca69f 00:08:35.459 10:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.459 10:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.459 [2024-11-15 10:36:56.450982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:35.459 [2024-11-15 10:36:56.451042] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:35.459 [2024-11-15 10:36:56.451060] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:35.459 [2024-11-15 10:36:56.451386] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:08:35.459 [2024-11-15 10:36:56.451631] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:35.459 [2024-11-15 10:36:56.451658] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:35.459 [2024-11-15 10:36:56.451945] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:35.459 NewBaseBdev 00:08:35.459 10:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.459 10:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:35.459 10:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:35.459 10:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:35.459 10:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:35.459 10:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:35.459 10:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:35.459 10:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:35.459 10:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.459 10:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.459 10:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.459 10:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:35.459 10:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.459 10:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:35.459 [ 00:08:35.459 { 00:08:35.459 "name": "NewBaseBdev", 00:08:35.459 "aliases": [ 00:08:35.459 "c3b11443-cd85-407b-870f-af06dd3ca69f" 00:08:35.459 ], 00:08:35.459 "product_name": "Malloc disk", 00:08:35.459 "block_size": 512, 00:08:35.459 "num_blocks": 65536, 00:08:35.459 "uuid": "c3b11443-cd85-407b-870f-af06dd3ca69f", 00:08:35.459 "assigned_rate_limits": { 00:08:35.459 "rw_ios_per_sec": 0, 00:08:35.459 "rw_mbytes_per_sec": 0, 00:08:35.459 "r_mbytes_per_sec": 0, 00:08:35.459 "w_mbytes_per_sec": 0 00:08:35.459 }, 00:08:35.459 "claimed": true, 00:08:35.459 "claim_type": "exclusive_write", 00:08:35.459 "zoned": false, 00:08:35.459 "supported_io_types": { 00:08:35.459 "read": true, 00:08:35.459 "write": true, 00:08:35.459 "unmap": true, 00:08:35.459 "flush": true, 00:08:35.459 "reset": true, 00:08:35.459 "nvme_admin": false, 00:08:35.459 "nvme_io": false, 00:08:35.459 "nvme_io_md": false, 00:08:35.459 "write_zeroes": true, 00:08:35.459 "zcopy": true, 00:08:35.459 "get_zone_info": false, 00:08:35.459 "zone_management": false, 00:08:35.459 "zone_append": false, 00:08:35.459 "compare": false, 00:08:35.459 "compare_and_write": false, 00:08:35.459 "abort": true, 00:08:35.459 "seek_hole": false, 00:08:35.459 "seek_data": false, 00:08:35.459 "copy": true, 00:08:35.459 "nvme_iov_md": false 00:08:35.459 }, 00:08:35.459 "memory_domains": [ 00:08:35.459 { 00:08:35.459 "dma_device_id": "system", 00:08:35.459 "dma_device_type": 1 00:08:35.459 }, 00:08:35.459 { 00:08:35.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.459 "dma_device_type": 2 00:08:35.459 } 00:08:35.459 ], 00:08:35.459 "driver_specific": {} 00:08:35.459 } 00:08:35.459 ] 00:08:35.459 10:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.459 10:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:35.459 10:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:08:35.459 10:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.459 10:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:35.459 10:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:35.459 10:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.459 10:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.459 10:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.459 10:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.459 10:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.459 10:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.459 10:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.459 10:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.459 10:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.459 10:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.459 10:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.460 10:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.460 "name": "Existed_Raid", 00:08:35.460 "uuid": "ccba2999-b75a-40f5-bf91-9c4b58917e19", 00:08:35.460 "strip_size_kb": 64, 00:08:35.460 "state": "online", 00:08:35.460 "raid_level": "raid0", 00:08:35.460 "superblock": false, 00:08:35.460 "num_base_bdevs": 3, 00:08:35.460 
"num_base_bdevs_discovered": 3, 00:08:35.460 "num_base_bdevs_operational": 3, 00:08:35.460 "base_bdevs_list": [ 00:08:35.460 { 00:08:35.460 "name": "NewBaseBdev", 00:08:35.460 "uuid": "c3b11443-cd85-407b-870f-af06dd3ca69f", 00:08:35.460 "is_configured": true, 00:08:35.460 "data_offset": 0, 00:08:35.460 "data_size": 65536 00:08:35.460 }, 00:08:35.460 { 00:08:35.460 "name": "BaseBdev2", 00:08:35.460 "uuid": "525a0ab2-3488-4169-a7db-0c93690d524c", 00:08:35.460 "is_configured": true, 00:08:35.460 "data_offset": 0, 00:08:35.460 "data_size": 65536 00:08:35.460 }, 00:08:35.460 { 00:08:35.460 "name": "BaseBdev3", 00:08:35.460 "uuid": "70ceaeef-18f4-4cef-bed5-c22fbc29124f", 00:08:35.460 "is_configured": true, 00:08:35.460 "data_offset": 0, 00:08:35.460 "data_size": 65536 00:08:35.460 } 00:08:35.460 ] 00:08:35.460 }' 00:08:35.460 10:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.460 10:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.026 10:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:36.026 10:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:36.026 10:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:36.026 10:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:36.026 10:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:36.026 10:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:36.026 10:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:36.026 10:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:36.026 10:36:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.026 10:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.026 [2024-11-15 10:36:56.983567] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:36.026 10:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.026 10:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:36.026 "name": "Existed_Raid", 00:08:36.026 "aliases": [ 00:08:36.026 "ccba2999-b75a-40f5-bf91-9c4b58917e19" 00:08:36.026 ], 00:08:36.026 "product_name": "Raid Volume", 00:08:36.026 "block_size": 512, 00:08:36.026 "num_blocks": 196608, 00:08:36.026 "uuid": "ccba2999-b75a-40f5-bf91-9c4b58917e19", 00:08:36.026 "assigned_rate_limits": { 00:08:36.026 "rw_ios_per_sec": 0, 00:08:36.026 "rw_mbytes_per_sec": 0, 00:08:36.026 "r_mbytes_per_sec": 0, 00:08:36.026 "w_mbytes_per_sec": 0 00:08:36.026 }, 00:08:36.026 "claimed": false, 00:08:36.026 "zoned": false, 00:08:36.026 "supported_io_types": { 00:08:36.026 "read": true, 00:08:36.026 "write": true, 00:08:36.026 "unmap": true, 00:08:36.026 "flush": true, 00:08:36.026 "reset": true, 00:08:36.026 "nvme_admin": false, 00:08:36.026 "nvme_io": false, 00:08:36.026 "nvme_io_md": false, 00:08:36.026 "write_zeroes": true, 00:08:36.026 "zcopy": false, 00:08:36.026 "get_zone_info": false, 00:08:36.026 "zone_management": false, 00:08:36.026 "zone_append": false, 00:08:36.026 "compare": false, 00:08:36.026 "compare_and_write": false, 00:08:36.026 "abort": false, 00:08:36.026 "seek_hole": false, 00:08:36.026 "seek_data": false, 00:08:36.026 "copy": false, 00:08:36.026 "nvme_iov_md": false 00:08:36.026 }, 00:08:36.026 "memory_domains": [ 00:08:36.026 { 00:08:36.026 "dma_device_id": "system", 00:08:36.026 "dma_device_type": 1 00:08:36.026 }, 00:08:36.026 { 00:08:36.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.026 "dma_device_type": 2 00:08:36.026 }, 
00:08:36.026 { 00:08:36.027 "dma_device_id": "system", 00:08:36.027 "dma_device_type": 1 00:08:36.027 }, 00:08:36.027 { 00:08:36.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.027 "dma_device_type": 2 00:08:36.027 }, 00:08:36.027 { 00:08:36.027 "dma_device_id": "system", 00:08:36.027 "dma_device_type": 1 00:08:36.027 }, 00:08:36.027 { 00:08:36.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.027 "dma_device_type": 2 00:08:36.027 } 00:08:36.027 ], 00:08:36.027 "driver_specific": { 00:08:36.027 "raid": { 00:08:36.027 "uuid": "ccba2999-b75a-40f5-bf91-9c4b58917e19", 00:08:36.027 "strip_size_kb": 64, 00:08:36.027 "state": "online", 00:08:36.027 "raid_level": "raid0", 00:08:36.027 "superblock": false, 00:08:36.027 "num_base_bdevs": 3, 00:08:36.027 "num_base_bdevs_discovered": 3, 00:08:36.027 "num_base_bdevs_operational": 3, 00:08:36.027 "base_bdevs_list": [ 00:08:36.027 { 00:08:36.027 "name": "NewBaseBdev", 00:08:36.027 "uuid": "c3b11443-cd85-407b-870f-af06dd3ca69f", 00:08:36.027 "is_configured": true, 00:08:36.027 "data_offset": 0, 00:08:36.027 "data_size": 65536 00:08:36.027 }, 00:08:36.027 { 00:08:36.027 "name": "BaseBdev2", 00:08:36.027 "uuid": "525a0ab2-3488-4169-a7db-0c93690d524c", 00:08:36.027 "is_configured": true, 00:08:36.027 "data_offset": 0, 00:08:36.027 "data_size": 65536 00:08:36.027 }, 00:08:36.027 { 00:08:36.027 "name": "BaseBdev3", 00:08:36.027 "uuid": "70ceaeef-18f4-4cef-bed5-c22fbc29124f", 00:08:36.027 "is_configured": true, 00:08:36.027 "data_offset": 0, 00:08:36.027 "data_size": 65536 00:08:36.027 } 00:08:36.027 ] 00:08:36.027 } 00:08:36.027 } 00:08:36.027 }' 00:08:36.027 10:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:36.027 10:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:36.027 BaseBdev2 00:08:36.027 BaseBdev3' 00:08:36.027 10:36:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.027 10:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:36.027 10:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:36.027 10:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:36.027 10:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.027 10:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.027 10:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.027 10:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.286 10:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:36.286 10:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.286 10:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:36.286 10:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.286 10:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:36.286 10:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.286 10:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.286 10:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.286 10:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:36.286 10:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.286 10:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:36.286 10:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:36.286 10:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.286 10:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.286 10:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.286 10:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.286 10:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:36.286 10:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.286 10:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:36.286 10:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.286 10:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.286 [2024-11-15 10:36:57.295237] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:36.286 [2024-11-15 10:36:57.295275] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:36.286 [2024-11-15 10:36:57.295378] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:36.286 [2024-11-15 10:36:57.295467] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:36.286 [2024-11-15 10:36:57.295505] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:36.286 10:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.286 10:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63794 00:08:36.286 10:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63794 ']' 00:08:36.286 10:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63794 00:08:36.286 10:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:36.286 10:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:36.286 10:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63794 00:08:36.286 killing process with pid 63794 00:08:36.286 10:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:36.286 10:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:36.286 10:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63794' 00:08:36.286 10:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63794 00:08:36.286 [2024-11-15 10:36:57.334126] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:36.286 10:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63794 00:08:36.545 [2024-11-15 10:36:57.605715] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:37.920 ************************************ 00:08:37.920 END TEST raid_state_function_test 00:08:37.920 ************************************ 00:08:37.920 10:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:37.920 00:08:37.920 real 0m11.757s 
00:08:37.920 user 0m19.518s 00:08:37.920 sys 0m1.582s 00:08:37.920 10:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.920 10:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.920 10:36:58 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:37.920 10:36:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:37.920 10:36:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.920 10:36:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:37.920 ************************************ 00:08:37.920 START TEST raid_state_function_test_sb 00:08:37.920 ************************************ 00:08:37.920 10:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:08:37.920 10:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:37.920 10:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:37.920 10:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:37.920 10:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:37.920 10:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:37.920 10:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:37.920 10:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:37.920 10:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:37.920 10:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:37.920 10:36:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:37.920 10:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:37.920 10:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:37.920 10:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:37.920 10:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:37.920 10:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:37.920 10:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:37.920 10:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:37.920 10:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:37.920 10:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:37.920 10:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:37.920 10:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:37.920 10:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:37.920 10:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:37.920 10:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:37.920 10:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:37.920 10:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:37.920 10:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64426 00:08:37.920 10:36:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:37.920 Process raid pid: 64426 00:08:37.920 10:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64426' 00:08:37.920 10:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64426 00:08:37.920 10:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64426 ']' 00:08:37.920 10:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.920 10:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:37.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.920 10:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.920 10:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:37.920 10:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.920 [2024-11-15 10:36:58.800175] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:08:37.920 [2024-11-15 10:36:58.800318] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:37.920 [2024-11-15 10:36:58.979167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.178 [2024-11-15 10:36:59.112629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.436 [2024-11-15 10:36:59.345015] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:38.436 [2024-11-15 10:36:59.345073] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:38.695 10:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:38.695 10:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:38.695 10:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:38.695 10:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.695 10:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.695 [2024-11-15 10:36:59.853077] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:38.695 [2024-11-15 10:36:59.853139] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:38.695 [2024-11-15 10:36:59.853158] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:38.695 [2024-11-15 10:36:59.853176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:38.695 [2024-11-15 10:36:59.853187] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:08:38.695 [2024-11-15 10:36:59.853211] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:38.954 10:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.954 10:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:38.954 10:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.954 10:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.954 10:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:38.954 10:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.954 10:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.954 10:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.954 10:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.954 10:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.954 10:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.954 10:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.954 10:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.954 10:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.954 10:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.954 10:36:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.954 10:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.954 "name": "Existed_Raid", 00:08:38.954 "uuid": "87b14a9f-017d-4345-9ab6-ff9fca27beea", 00:08:38.954 "strip_size_kb": 64, 00:08:38.954 "state": "configuring", 00:08:38.954 "raid_level": "raid0", 00:08:38.954 "superblock": true, 00:08:38.954 "num_base_bdevs": 3, 00:08:38.954 "num_base_bdevs_discovered": 0, 00:08:38.954 "num_base_bdevs_operational": 3, 00:08:38.954 "base_bdevs_list": [ 00:08:38.954 { 00:08:38.954 "name": "BaseBdev1", 00:08:38.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.954 "is_configured": false, 00:08:38.954 "data_offset": 0, 00:08:38.954 "data_size": 0 00:08:38.954 }, 00:08:38.954 { 00:08:38.954 "name": "BaseBdev2", 00:08:38.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.954 "is_configured": false, 00:08:38.954 "data_offset": 0, 00:08:38.954 "data_size": 0 00:08:38.954 }, 00:08:38.954 { 00:08:38.954 "name": "BaseBdev3", 00:08:38.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.954 "is_configured": false, 00:08:38.954 "data_offset": 0, 00:08:38.954 "data_size": 0 00:08:38.954 } 00:08:38.954 ] 00:08:38.954 }' 00:08:38.954 10:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.954 10:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.212 10:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:39.212 10:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.212 10:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.212 [2024-11-15 10:37:00.357151] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:39.212 [2024-11-15 10:37:00.357200] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:39.212 10:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.212 10:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:39.212 10:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.212 10:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.212 [2024-11-15 10:37:00.365175] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:39.212 [2024-11-15 10:37:00.365244] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:39.212 [2024-11-15 10:37:00.365259] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:39.212 [2024-11-15 10:37:00.365276] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:39.212 [2024-11-15 10:37:00.365286] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:39.212 [2024-11-15 10:37:00.365301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:39.212 10:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.212 10:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:39.212 10:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.212 10:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.470 [2024-11-15 10:37:00.410208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:39.470 BaseBdev1 
00:08:39.470 10:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.470 10:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:39.470 10:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:39.470 10:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:39.470 10:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:39.470 10:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:39.470 10:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:39.470 10:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:39.470 10:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.470 10:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.470 10:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.470 10:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:39.470 10:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.470 10:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.470 [ 00:08:39.470 { 00:08:39.470 "name": "BaseBdev1", 00:08:39.470 "aliases": [ 00:08:39.470 "915ec58e-4dc0-4bd4-8bfb-560ea109b677" 00:08:39.470 ], 00:08:39.470 "product_name": "Malloc disk", 00:08:39.470 "block_size": 512, 00:08:39.470 "num_blocks": 65536, 00:08:39.470 "uuid": "915ec58e-4dc0-4bd4-8bfb-560ea109b677", 00:08:39.471 "assigned_rate_limits": { 00:08:39.471 
"rw_ios_per_sec": 0, 00:08:39.471 "rw_mbytes_per_sec": 0, 00:08:39.471 "r_mbytes_per_sec": 0, 00:08:39.471 "w_mbytes_per_sec": 0 00:08:39.471 }, 00:08:39.471 "claimed": true, 00:08:39.471 "claim_type": "exclusive_write", 00:08:39.471 "zoned": false, 00:08:39.471 "supported_io_types": { 00:08:39.471 "read": true, 00:08:39.471 "write": true, 00:08:39.471 "unmap": true, 00:08:39.471 "flush": true, 00:08:39.471 "reset": true, 00:08:39.471 "nvme_admin": false, 00:08:39.471 "nvme_io": false, 00:08:39.471 "nvme_io_md": false, 00:08:39.471 "write_zeroes": true, 00:08:39.471 "zcopy": true, 00:08:39.471 "get_zone_info": false, 00:08:39.471 "zone_management": false, 00:08:39.471 "zone_append": false, 00:08:39.471 "compare": false, 00:08:39.471 "compare_and_write": false, 00:08:39.471 "abort": true, 00:08:39.471 "seek_hole": false, 00:08:39.471 "seek_data": false, 00:08:39.471 "copy": true, 00:08:39.471 "nvme_iov_md": false 00:08:39.471 }, 00:08:39.471 "memory_domains": [ 00:08:39.471 { 00:08:39.471 "dma_device_id": "system", 00:08:39.471 "dma_device_type": 1 00:08:39.471 }, 00:08:39.471 { 00:08:39.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.471 "dma_device_type": 2 00:08:39.471 } 00:08:39.471 ], 00:08:39.471 "driver_specific": {} 00:08:39.471 } 00:08:39.471 ] 00:08:39.471 10:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.471 10:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:39.471 10:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:39.471 10:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.471 10:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.471 10:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:39.471 10:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.471 10:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.471 10:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.471 10:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.471 10:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.471 10:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.471 10:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.471 10:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.471 10:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.471 10:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.471 10:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.471 10:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.471 "name": "Existed_Raid", 00:08:39.471 "uuid": "bd99a979-965d-4aed-ac7d-00eb1922bb5c", 00:08:39.471 "strip_size_kb": 64, 00:08:39.471 "state": "configuring", 00:08:39.471 "raid_level": "raid0", 00:08:39.471 "superblock": true, 00:08:39.471 "num_base_bdevs": 3, 00:08:39.471 "num_base_bdevs_discovered": 1, 00:08:39.471 "num_base_bdevs_operational": 3, 00:08:39.471 "base_bdevs_list": [ 00:08:39.471 { 00:08:39.471 "name": "BaseBdev1", 00:08:39.471 "uuid": "915ec58e-4dc0-4bd4-8bfb-560ea109b677", 00:08:39.471 "is_configured": true, 00:08:39.471 "data_offset": 2048, 00:08:39.471 "data_size": 63488 
00:08:39.471 }, 00:08:39.471 { 00:08:39.471 "name": "BaseBdev2", 00:08:39.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.471 "is_configured": false, 00:08:39.471 "data_offset": 0, 00:08:39.471 "data_size": 0 00:08:39.471 }, 00:08:39.471 { 00:08:39.471 "name": "BaseBdev3", 00:08:39.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.471 "is_configured": false, 00:08:39.471 "data_offset": 0, 00:08:39.471 "data_size": 0 00:08:39.471 } 00:08:39.471 ] 00:08:39.471 }' 00:08:39.471 10:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.471 10:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.036 10:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:40.036 10:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.036 10:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.036 [2024-11-15 10:37:00.966454] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:40.036 [2024-11-15 10:37:00.966537] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:40.036 10:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.036 10:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:40.036 10:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.036 10:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.036 [2024-11-15 10:37:00.974523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:40.036 [2024-11-15 
10:37:00.976947] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:40.036 [2024-11-15 10:37:00.976996] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:40.036 [2024-11-15 10:37:00.977013] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:40.036 [2024-11-15 10:37:00.977029] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:40.036 10:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.036 10:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:40.036 10:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:40.036 10:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:40.036 10:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.036 10:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.036 10:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:40.036 10:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.036 10:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.036 10:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.036 10:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.036 10:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.036 10:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:40.036 10:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.036 10:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.036 10:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.036 10:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.036 10:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.036 10:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.036 "name": "Existed_Raid", 00:08:40.036 "uuid": "422c7b8b-1edc-4219-a56b-b719d4e0ec52", 00:08:40.036 "strip_size_kb": 64, 00:08:40.037 "state": "configuring", 00:08:40.037 "raid_level": "raid0", 00:08:40.037 "superblock": true, 00:08:40.037 "num_base_bdevs": 3, 00:08:40.037 "num_base_bdevs_discovered": 1, 00:08:40.037 "num_base_bdevs_operational": 3, 00:08:40.037 "base_bdevs_list": [ 00:08:40.037 { 00:08:40.037 "name": "BaseBdev1", 00:08:40.037 "uuid": "915ec58e-4dc0-4bd4-8bfb-560ea109b677", 00:08:40.037 "is_configured": true, 00:08:40.037 "data_offset": 2048, 00:08:40.037 "data_size": 63488 00:08:40.037 }, 00:08:40.037 { 00:08:40.037 "name": "BaseBdev2", 00:08:40.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.037 "is_configured": false, 00:08:40.037 "data_offset": 0, 00:08:40.037 "data_size": 0 00:08:40.037 }, 00:08:40.037 { 00:08:40.037 "name": "BaseBdev3", 00:08:40.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.037 "is_configured": false, 00:08:40.037 "data_offset": 0, 00:08:40.037 "data_size": 0 00:08:40.037 } 00:08:40.037 ] 00:08:40.037 }' 00:08:40.037 10:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.037 10:37:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:40.601 10:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:40.601 10:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.601 10:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.601 [2024-11-15 10:37:01.521075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:40.601 BaseBdev2 00:08:40.601 10:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.601 10:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:40.602 10:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:40.602 10:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:40.602 10:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:40.602 10:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:40.602 10:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:40.602 10:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:40.602 10:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.602 10:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.602 10:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.602 10:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:40.602 10:37:01 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.602 10:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.602 [ 00:08:40.602 { 00:08:40.602 "name": "BaseBdev2", 00:08:40.602 "aliases": [ 00:08:40.602 "8c9c18af-07cc-4b68-8715-75e977c4443b" 00:08:40.602 ], 00:08:40.602 "product_name": "Malloc disk", 00:08:40.602 "block_size": 512, 00:08:40.602 "num_blocks": 65536, 00:08:40.602 "uuid": "8c9c18af-07cc-4b68-8715-75e977c4443b", 00:08:40.602 "assigned_rate_limits": { 00:08:40.602 "rw_ios_per_sec": 0, 00:08:40.602 "rw_mbytes_per_sec": 0, 00:08:40.602 "r_mbytes_per_sec": 0, 00:08:40.602 "w_mbytes_per_sec": 0 00:08:40.602 }, 00:08:40.602 "claimed": true, 00:08:40.602 "claim_type": "exclusive_write", 00:08:40.602 "zoned": false, 00:08:40.602 "supported_io_types": { 00:08:40.602 "read": true, 00:08:40.602 "write": true, 00:08:40.602 "unmap": true, 00:08:40.602 "flush": true, 00:08:40.602 "reset": true, 00:08:40.602 "nvme_admin": false, 00:08:40.602 "nvme_io": false, 00:08:40.602 "nvme_io_md": false, 00:08:40.602 "write_zeroes": true, 00:08:40.602 "zcopy": true, 00:08:40.602 "get_zone_info": false, 00:08:40.602 "zone_management": false, 00:08:40.602 "zone_append": false, 00:08:40.602 "compare": false, 00:08:40.602 "compare_and_write": false, 00:08:40.602 "abort": true, 00:08:40.602 "seek_hole": false, 00:08:40.602 "seek_data": false, 00:08:40.602 "copy": true, 00:08:40.602 "nvme_iov_md": false 00:08:40.602 }, 00:08:40.602 "memory_domains": [ 00:08:40.602 { 00:08:40.602 "dma_device_id": "system", 00:08:40.602 "dma_device_type": 1 00:08:40.602 }, 00:08:40.602 { 00:08:40.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.602 "dma_device_type": 2 00:08:40.602 } 00:08:40.602 ], 00:08:40.602 "driver_specific": {} 00:08:40.602 } 00:08:40.602 ] 00:08:40.602 10:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.602 10:37:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:08:40.602 10:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:40.602 10:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:40.602 10:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:40.602 10:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.602 10:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.602 10:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:40.602 10:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.602 10:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.602 10:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.602 10:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.602 10:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.602 10:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.602 10:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.602 10:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.602 10:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.602 10:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.602 10:37:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.602 10:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.602 "name": "Existed_Raid", 00:08:40.602 "uuid": "422c7b8b-1edc-4219-a56b-b719d4e0ec52", 00:08:40.602 "strip_size_kb": 64, 00:08:40.602 "state": "configuring", 00:08:40.602 "raid_level": "raid0", 00:08:40.602 "superblock": true, 00:08:40.602 "num_base_bdevs": 3, 00:08:40.602 "num_base_bdevs_discovered": 2, 00:08:40.602 "num_base_bdevs_operational": 3, 00:08:40.602 "base_bdevs_list": [ 00:08:40.602 { 00:08:40.602 "name": "BaseBdev1", 00:08:40.602 "uuid": "915ec58e-4dc0-4bd4-8bfb-560ea109b677", 00:08:40.602 "is_configured": true, 00:08:40.602 "data_offset": 2048, 00:08:40.602 "data_size": 63488 00:08:40.602 }, 00:08:40.602 { 00:08:40.602 "name": "BaseBdev2", 00:08:40.602 "uuid": "8c9c18af-07cc-4b68-8715-75e977c4443b", 00:08:40.602 "is_configured": true, 00:08:40.602 "data_offset": 2048, 00:08:40.602 "data_size": 63488 00:08:40.602 }, 00:08:40.602 { 00:08:40.602 "name": "BaseBdev3", 00:08:40.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.602 "is_configured": false, 00:08:40.602 "data_offset": 0, 00:08:40.602 "data_size": 0 00:08:40.602 } 00:08:40.602 ] 00:08:40.602 }' 00:08:40.602 10:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.602 10:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.168 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:41.169 10:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.169 10:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.169 [2024-11-15 10:37:02.115874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:41.169 [2024-11-15 10:37:02.116216] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:41.169 [2024-11-15 10:37:02.116257] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:41.169 [2024-11-15 10:37:02.116645] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:41.169 BaseBdev3 00:08:41.169 [2024-11-15 10:37:02.116851] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:41.169 [2024-11-15 10:37:02.116869] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:41.169 [2024-11-15 10:37:02.117056] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:41.169 10:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.169 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:41.169 10:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:41.169 10:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:41.169 10:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:41.169 10:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:41.169 10:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:41.169 10:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:41.169 10:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.169 10:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.169 10:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:08:41.169 10:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:41.169 10:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.169 10:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.169 [ 00:08:41.169 { 00:08:41.169 "name": "BaseBdev3", 00:08:41.169 "aliases": [ 00:08:41.169 "bca1b791-5c09-4cb4-8a88-240bc9f8badb" 00:08:41.169 ], 00:08:41.169 "product_name": "Malloc disk", 00:08:41.169 "block_size": 512, 00:08:41.169 "num_blocks": 65536, 00:08:41.169 "uuid": "bca1b791-5c09-4cb4-8a88-240bc9f8badb", 00:08:41.169 "assigned_rate_limits": { 00:08:41.169 "rw_ios_per_sec": 0, 00:08:41.169 "rw_mbytes_per_sec": 0, 00:08:41.169 "r_mbytes_per_sec": 0, 00:08:41.169 "w_mbytes_per_sec": 0 00:08:41.169 }, 00:08:41.169 "claimed": true, 00:08:41.169 "claim_type": "exclusive_write", 00:08:41.169 "zoned": false, 00:08:41.169 "supported_io_types": { 00:08:41.169 "read": true, 00:08:41.169 "write": true, 00:08:41.169 "unmap": true, 00:08:41.169 "flush": true, 00:08:41.169 "reset": true, 00:08:41.169 "nvme_admin": false, 00:08:41.169 "nvme_io": false, 00:08:41.169 "nvme_io_md": false, 00:08:41.169 "write_zeroes": true, 00:08:41.169 "zcopy": true, 00:08:41.169 "get_zone_info": false, 00:08:41.169 "zone_management": false, 00:08:41.169 "zone_append": false, 00:08:41.169 "compare": false, 00:08:41.169 "compare_and_write": false, 00:08:41.169 "abort": true, 00:08:41.169 "seek_hole": false, 00:08:41.169 "seek_data": false, 00:08:41.169 "copy": true, 00:08:41.169 "nvme_iov_md": false 00:08:41.169 }, 00:08:41.169 "memory_domains": [ 00:08:41.169 { 00:08:41.169 "dma_device_id": "system", 00:08:41.169 "dma_device_type": 1 00:08:41.169 }, 00:08:41.169 { 00:08:41.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.169 "dma_device_type": 2 00:08:41.169 } 00:08:41.169 ], 00:08:41.169 "driver_specific": 
{} 00:08:41.169 } 00:08:41.169 ] 00:08:41.169 10:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.169 10:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:41.169 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:41.169 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:41.169 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:41.169 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.169 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:41.169 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:41.169 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.169 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.169 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.169 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.169 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.169 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.169 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.169 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.169 10:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:41.169 10:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.169 10:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.169 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.169 "name": "Existed_Raid", 00:08:41.169 "uuid": "422c7b8b-1edc-4219-a56b-b719d4e0ec52", 00:08:41.169 "strip_size_kb": 64, 00:08:41.169 "state": "online", 00:08:41.169 "raid_level": "raid0", 00:08:41.169 "superblock": true, 00:08:41.169 "num_base_bdevs": 3, 00:08:41.169 "num_base_bdevs_discovered": 3, 00:08:41.169 "num_base_bdevs_operational": 3, 00:08:41.169 "base_bdevs_list": [ 00:08:41.169 { 00:08:41.169 "name": "BaseBdev1", 00:08:41.169 "uuid": "915ec58e-4dc0-4bd4-8bfb-560ea109b677", 00:08:41.169 "is_configured": true, 00:08:41.169 "data_offset": 2048, 00:08:41.169 "data_size": 63488 00:08:41.169 }, 00:08:41.169 { 00:08:41.169 "name": "BaseBdev2", 00:08:41.169 "uuid": "8c9c18af-07cc-4b68-8715-75e977c4443b", 00:08:41.169 "is_configured": true, 00:08:41.169 "data_offset": 2048, 00:08:41.169 "data_size": 63488 00:08:41.169 }, 00:08:41.169 { 00:08:41.169 "name": "BaseBdev3", 00:08:41.169 "uuid": "bca1b791-5c09-4cb4-8a88-240bc9f8badb", 00:08:41.169 "is_configured": true, 00:08:41.169 "data_offset": 2048, 00:08:41.169 "data_size": 63488 00:08:41.169 } 00:08:41.169 ] 00:08:41.169 }' 00:08:41.169 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.169 10:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.735 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:41.735 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:41.736 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:08:41.736 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:41.736 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:41.736 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:41.736 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:41.736 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:41.736 10:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.736 10:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.736 [2024-11-15 10:37:02.648457] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:41.736 10:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.736 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:41.736 "name": "Existed_Raid", 00:08:41.736 "aliases": [ 00:08:41.736 "422c7b8b-1edc-4219-a56b-b719d4e0ec52" 00:08:41.736 ], 00:08:41.736 "product_name": "Raid Volume", 00:08:41.736 "block_size": 512, 00:08:41.736 "num_blocks": 190464, 00:08:41.736 "uuid": "422c7b8b-1edc-4219-a56b-b719d4e0ec52", 00:08:41.736 "assigned_rate_limits": { 00:08:41.736 "rw_ios_per_sec": 0, 00:08:41.736 "rw_mbytes_per_sec": 0, 00:08:41.736 "r_mbytes_per_sec": 0, 00:08:41.736 "w_mbytes_per_sec": 0 00:08:41.736 }, 00:08:41.736 "claimed": false, 00:08:41.736 "zoned": false, 00:08:41.736 "supported_io_types": { 00:08:41.736 "read": true, 00:08:41.736 "write": true, 00:08:41.736 "unmap": true, 00:08:41.736 "flush": true, 00:08:41.736 "reset": true, 00:08:41.736 "nvme_admin": false, 00:08:41.736 "nvme_io": false, 00:08:41.736 "nvme_io_md": false, 00:08:41.736 
"write_zeroes": true, 00:08:41.736 "zcopy": false, 00:08:41.736 "get_zone_info": false, 00:08:41.736 "zone_management": false, 00:08:41.736 "zone_append": false, 00:08:41.736 "compare": false, 00:08:41.736 "compare_and_write": false, 00:08:41.736 "abort": false, 00:08:41.736 "seek_hole": false, 00:08:41.736 "seek_data": false, 00:08:41.736 "copy": false, 00:08:41.736 "nvme_iov_md": false 00:08:41.736 }, 00:08:41.736 "memory_domains": [ 00:08:41.736 { 00:08:41.736 "dma_device_id": "system", 00:08:41.736 "dma_device_type": 1 00:08:41.736 }, 00:08:41.736 { 00:08:41.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.736 "dma_device_type": 2 00:08:41.736 }, 00:08:41.736 { 00:08:41.736 "dma_device_id": "system", 00:08:41.736 "dma_device_type": 1 00:08:41.736 }, 00:08:41.736 { 00:08:41.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.736 "dma_device_type": 2 00:08:41.736 }, 00:08:41.736 { 00:08:41.736 "dma_device_id": "system", 00:08:41.736 "dma_device_type": 1 00:08:41.736 }, 00:08:41.736 { 00:08:41.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.736 "dma_device_type": 2 00:08:41.736 } 00:08:41.736 ], 00:08:41.736 "driver_specific": { 00:08:41.736 "raid": { 00:08:41.736 "uuid": "422c7b8b-1edc-4219-a56b-b719d4e0ec52", 00:08:41.736 "strip_size_kb": 64, 00:08:41.736 "state": "online", 00:08:41.736 "raid_level": "raid0", 00:08:41.736 "superblock": true, 00:08:41.736 "num_base_bdevs": 3, 00:08:41.736 "num_base_bdevs_discovered": 3, 00:08:41.736 "num_base_bdevs_operational": 3, 00:08:41.736 "base_bdevs_list": [ 00:08:41.736 { 00:08:41.736 "name": "BaseBdev1", 00:08:41.736 "uuid": "915ec58e-4dc0-4bd4-8bfb-560ea109b677", 00:08:41.736 "is_configured": true, 00:08:41.736 "data_offset": 2048, 00:08:41.736 "data_size": 63488 00:08:41.736 }, 00:08:41.736 { 00:08:41.736 "name": "BaseBdev2", 00:08:41.736 "uuid": "8c9c18af-07cc-4b68-8715-75e977c4443b", 00:08:41.736 "is_configured": true, 00:08:41.736 "data_offset": 2048, 00:08:41.736 "data_size": 63488 00:08:41.736 }, 
00:08:41.736 { 00:08:41.736 "name": "BaseBdev3", 00:08:41.736 "uuid": "bca1b791-5c09-4cb4-8a88-240bc9f8badb", 00:08:41.736 "is_configured": true, 00:08:41.736 "data_offset": 2048, 00:08:41.736 "data_size": 63488 00:08:41.736 } 00:08:41.736 ] 00:08:41.736 } 00:08:41.736 } 00:08:41.736 }' 00:08:41.736 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:41.736 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:41.736 BaseBdev2 00:08:41.736 BaseBdev3' 00:08:41.736 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.736 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:41.736 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.736 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:41.736 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.736 10:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.736 10:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.736 10:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.736 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.736 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.736 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.736 
10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:41.736 10:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.736 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.736 10:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.736 10:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.736 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.736 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.736 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.736 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:41.736 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.736 10:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.736 10:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.994 10:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.994 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.994 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.994 10:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:41.994 10:37:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.994 10:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.994 [2024-11-15 10:37:02.940207] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:41.994 [2024-11-15 10:37:02.940248] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:41.994 [2024-11-15 10:37:02.940319] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:41.994 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.994 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:41.994 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:41.994 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:41.994 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:41.994 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:41.994 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:41.994 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.995 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:41.995 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:41.995 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.995 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:41.995 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:41.995 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.995 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.995 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.995 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.995 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.995 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.995 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.995 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.995 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.995 "name": "Existed_Raid", 00:08:41.995 "uuid": "422c7b8b-1edc-4219-a56b-b719d4e0ec52", 00:08:41.995 "strip_size_kb": 64, 00:08:41.995 "state": "offline", 00:08:41.995 "raid_level": "raid0", 00:08:41.995 "superblock": true, 00:08:41.995 "num_base_bdevs": 3, 00:08:41.995 "num_base_bdevs_discovered": 2, 00:08:41.995 "num_base_bdevs_operational": 2, 00:08:41.995 "base_bdevs_list": [ 00:08:41.995 { 00:08:41.995 "name": null, 00:08:41.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.995 "is_configured": false, 00:08:41.995 "data_offset": 0, 00:08:41.995 "data_size": 63488 00:08:41.995 }, 00:08:41.995 { 00:08:41.995 "name": "BaseBdev2", 00:08:41.995 "uuid": "8c9c18af-07cc-4b68-8715-75e977c4443b", 00:08:41.995 "is_configured": true, 00:08:41.995 "data_offset": 2048, 00:08:41.995 "data_size": 63488 00:08:41.995 }, 00:08:41.995 { 00:08:41.995 "name": "BaseBdev3", 00:08:41.995 "uuid": "bca1b791-5c09-4cb4-8a88-240bc9f8badb", 
00:08:41.995 "is_configured": true, 00:08:41.995 "data_offset": 2048, 00:08:41.995 "data_size": 63488 00:08:41.995 } 00:08:41.995 ] 00:08:41.995 }' 00:08:41.995 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.995 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.561 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:42.561 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:42.561 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.561 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.561 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.561 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:42.561 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.561 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:42.561 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:42.561 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:42.561 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.561 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.561 [2024-11-15 10:37:03.568495] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:42.561 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.561 10:37:03 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:42.561 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:42.561 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.561 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.561 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:42.561 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.561 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.561 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:42.561 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:42.561 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:42.561 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.561 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.561 [2024-11-15 10:37:03.716325] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:42.561 [2024-11-15 10:37:03.716392] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:42.819 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.819 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:42.819 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:42.819 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:42.819 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:42.819 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.819 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.819 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.819 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:42.819 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:42.819 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:42.819 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:42.819 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:42.819 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:42.819 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.819 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.819 BaseBdev2 00:08:42.819 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.819 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:42.819 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:42.819 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:42.819 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:42.819 10:37:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:42.819 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:42.819 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:42.819 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.819 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.819 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.819 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:42.819 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.819 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.819 [ 00:08:42.819 { 00:08:42.819 "name": "BaseBdev2", 00:08:42.819 "aliases": [ 00:08:42.819 "6c24460a-efa6-4ea9-915a-12897cb52d6a" 00:08:42.819 ], 00:08:42.819 "product_name": "Malloc disk", 00:08:42.819 "block_size": 512, 00:08:42.819 "num_blocks": 65536, 00:08:42.819 "uuid": "6c24460a-efa6-4ea9-915a-12897cb52d6a", 00:08:42.819 "assigned_rate_limits": { 00:08:42.819 "rw_ios_per_sec": 0, 00:08:42.819 "rw_mbytes_per_sec": 0, 00:08:42.819 "r_mbytes_per_sec": 0, 00:08:42.819 "w_mbytes_per_sec": 0 00:08:42.819 }, 00:08:42.819 "claimed": false, 00:08:42.819 "zoned": false, 00:08:42.819 "supported_io_types": { 00:08:42.819 "read": true, 00:08:42.819 "write": true, 00:08:42.819 "unmap": true, 00:08:42.819 "flush": true, 00:08:42.819 "reset": true, 00:08:42.819 "nvme_admin": false, 00:08:42.819 "nvme_io": false, 00:08:42.819 "nvme_io_md": false, 00:08:42.819 "write_zeroes": true, 00:08:42.819 "zcopy": true, 00:08:42.819 "get_zone_info": false, 00:08:42.819 
"zone_management": false, 00:08:42.819 "zone_append": false, 00:08:42.819 "compare": false, 00:08:42.819 "compare_and_write": false, 00:08:42.819 "abort": true, 00:08:42.819 "seek_hole": false, 00:08:42.819 "seek_data": false, 00:08:42.819 "copy": true, 00:08:42.819 "nvme_iov_md": false 00:08:42.819 }, 00:08:42.820 "memory_domains": [ 00:08:42.820 { 00:08:42.820 "dma_device_id": "system", 00:08:42.820 "dma_device_type": 1 00:08:42.820 }, 00:08:42.820 { 00:08:42.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.820 "dma_device_type": 2 00:08:42.820 } 00:08:42.820 ], 00:08:42.820 "driver_specific": {} 00:08:42.820 } 00:08:42.820 ] 00:08:42.820 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.820 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:42.820 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:42.820 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:42.820 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:42.820 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.820 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.820 BaseBdev3 00:08:43.077 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.077 10:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:43.077 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:43.077 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:43.077 10:37:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:08:43.077 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:43.077 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:43.077 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:43.077 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.077 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.077 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.077 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:43.077 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.077 10:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.077 [ 00:08:43.077 { 00:08:43.077 "name": "BaseBdev3", 00:08:43.077 "aliases": [ 00:08:43.077 "9333623c-bb72-44cd-a4ed-cff23db5db3a" 00:08:43.077 ], 00:08:43.077 "product_name": "Malloc disk", 00:08:43.077 "block_size": 512, 00:08:43.077 "num_blocks": 65536, 00:08:43.077 "uuid": "9333623c-bb72-44cd-a4ed-cff23db5db3a", 00:08:43.077 "assigned_rate_limits": { 00:08:43.077 "rw_ios_per_sec": 0, 00:08:43.077 "rw_mbytes_per_sec": 0, 00:08:43.077 "r_mbytes_per_sec": 0, 00:08:43.077 "w_mbytes_per_sec": 0 00:08:43.077 }, 00:08:43.077 "claimed": false, 00:08:43.077 "zoned": false, 00:08:43.077 "supported_io_types": { 00:08:43.077 "read": true, 00:08:43.077 "write": true, 00:08:43.077 "unmap": true, 00:08:43.077 "flush": true, 00:08:43.077 "reset": true, 00:08:43.077 "nvme_admin": false, 00:08:43.077 "nvme_io": false, 00:08:43.077 "nvme_io_md": false, 00:08:43.077 "write_zeroes": true, 00:08:43.077 
"zcopy": true, 00:08:43.077 "get_zone_info": false, 00:08:43.077 "zone_management": false, 00:08:43.077 "zone_append": false, 00:08:43.077 "compare": false, 00:08:43.077 "compare_and_write": false, 00:08:43.077 "abort": true, 00:08:43.077 "seek_hole": false, 00:08:43.077 "seek_data": false, 00:08:43.077 "copy": true, 00:08:43.077 "nvme_iov_md": false 00:08:43.077 }, 00:08:43.077 "memory_domains": [ 00:08:43.077 { 00:08:43.077 "dma_device_id": "system", 00:08:43.077 "dma_device_type": 1 00:08:43.077 }, 00:08:43.077 { 00:08:43.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.078 "dma_device_type": 2 00:08:43.078 } 00:08:43.078 ], 00:08:43.078 "driver_specific": {} 00:08:43.078 } 00:08:43.078 ] 00:08:43.078 10:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.078 10:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:43.078 10:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:43.078 10:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:43.078 10:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:43.078 10:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.078 10:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.078 [2024-11-15 10:37:04.013346] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:43.078 [2024-11-15 10:37:04.013401] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:43.078 [2024-11-15 10:37:04.013435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:43.078 [2024-11-15 10:37:04.015822] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:43.078 10:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.078 10:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:43.078 10:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.078 10:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.078 10:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:43.078 10:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.078 10:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.078 10:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.078 10:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.078 10:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.078 10:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.078 10:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.078 10:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.078 10:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.078 10:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.078 10:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.078 10:37:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.078 "name": "Existed_Raid", 00:08:43.078 "uuid": "5d59ab8c-072e-4c0b-a7a1-b48929d0ac9e", 00:08:43.078 "strip_size_kb": 64, 00:08:43.078 "state": "configuring", 00:08:43.078 "raid_level": "raid0", 00:08:43.078 "superblock": true, 00:08:43.078 "num_base_bdevs": 3, 00:08:43.078 "num_base_bdevs_discovered": 2, 00:08:43.078 "num_base_bdevs_operational": 3, 00:08:43.078 "base_bdevs_list": [ 00:08:43.078 { 00:08:43.078 "name": "BaseBdev1", 00:08:43.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.078 "is_configured": false, 00:08:43.078 "data_offset": 0, 00:08:43.078 "data_size": 0 00:08:43.078 }, 00:08:43.078 { 00:08:43.078 "name": "BaseBdev2", 00:08:43.078 "uuid": "6c24460a-efa6-4ea9-915a-12897cb52d6a", 00:08:43.078 "is_configured": true, 00:08:43.078 "data_offset": 2048, 00:08:43.078 "data_size": 63488 00:08:43.078 }, 00:08:43.078 { 00:08:43.078 "name": "BaseBdev3", 00:08:43.078 "uuid": "9333623c-bb72-44cd-a4ed-cff23db5db3a", 00:08:43.078 "is_configured": true, 00:08:43.078 "data_offset": 2048, 00:08:43.078 "data_size": 63488 00:08:43.078 } 00:08:43.078 ] 00:08:43.078 }' 00:08:43.078 10:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.078 10:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.643 10:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:43.643 10:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.643 10:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.643 [2024-11-15 10:37:04.505480] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:43.643 10:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.643 10:37:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:43.643 10:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.643 10:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.643 10:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:43.643 10:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.643 10:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.643 10:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.643 10:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.643 10:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.643 10:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.643 10:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.643 10:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.643 10:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.643 10:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.643 10:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.643 10:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.643 "name": "Existed_Raid", 00:08:43.643 "uuid": "5d59ab8c-072e-4c0b-a7a1-b48929d0ac9e", 00:08:43.643 "strip_size_kb": 64, 
00:08:43.643 "state": "configuring", 00:08:43.643 "raid_level": "raid0", 00:08:43.643 "superblock": true, 00:08:43.643 "num_base_bdevs": 3, 00:08:43.643 "num_base_bdevs_discovered": 1, 00:08:43.643 "num_base_bdevs_operational": 3, 00:08:43.643 "base_bdevs_list": [ 00:08:43.643 { 00:08:43.643 "name": "BaseBdev1", 00:08:43.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.643 "is_configured": false, 00:08:43.643 "data_offset": 0, 00:08:43.643 "data_size": 0 00:08:43.643 }, 00:08:43.643 { 00:08:43.643 "name": null, 00:08:43.643 "uuid": "6c24460a-efa6-4ea9-915a-12897cb52d6a", 00:08:43.643 "is_configured": false, 00:08:43.643 "data_offset": 0, 00:08:43.643 "data_size": 63488 00:08:43.643 }, 00:08:43.643 { 00:08:43.643 "name": "BaseBdev3", 00:08:43.643 "uuid": "9333623c-bb72-44cd-a4ed-cff23db5db3a", 00:08:43.643 "is_configured": true, 00:08:43.643 "data_offset": 2048, 00:08:43.643 "data_size": 63488 00:08:43.643 } 00:08:43.643 ] 00:08:43.643 }' 00:08:43.643 10:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.643 10:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.900 10:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.900 10:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.900 10:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.900 10:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:43.900 10:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.159 10:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:44.159 10:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:08:44.159 10:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.159 10:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.159 [2024-11-15 10:37:05.123887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:44.159 BaseBdev1 00:08:44.159 10:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.159 10:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:44.159 10:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:44.159 10:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:44.159 10:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:44.159 10:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:44.159 10:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:44.159 10:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:44.159 10:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.159 10:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.159 10:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.159 10:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:44.159 10:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.159 10:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.159 
[ 00:08:44.159 { 00:08:44.159 "name": "BaseBdev1", 00:08:44.159 "aliases": [ 00:08:44.159 "6a464adc-372b-4bd5-afc8-acd3c5db74b6" 00:08:44.159 ], 00:08:44.159 "product_name": "Malloc disk", 00:08:44.159 "block_size": 512, 00:08:44.159 "num_blocks": 65536, 00:08:44.159 "uuid": "6a464adc-372b-4bd5-afc8-acd3c5db74b6", 00:08:44.159 "assigned_rate_limits": { 00:08:44.159 "rw_ios_per_sec": 0, 00:08:44.159 "rw_mbytes_per_sec": 0, 00:08:44.159 "r_mbytes_per_sec": 0, 00:08:44.159 "w_mbytes_per_sec": 0 00:08:44.159 }, 00:08:44.159 "claimed": true, 00:08:44.159 "claim_type": "exclusive_write", 00:08:44.159 "zoned": false, 00:08:44.159 "supported_io_types": { 00:08:44.159 "read": true, 00:08:44.159 "write": true, 00:08:44.159 "unmap": true, 00:08:44.159 "flush": true, 00:08:44.159 "reset": true, 00:08:44.159 "nvme_admin": false, 00:08:44.159 "nvme_io": false, 00:08:44.159 "nvme_io_md": false, 00:08:44.159 "write_zeroes": true, 00:08:44.159 "zcopy": true, 00:08:44.159 "get_zone_info": false, 00:08:44.159 "zone_management": false, 00:08:44.159 "zone_append": false, 00:08:44.159 "compare": false, 00:08:44.159 "compare_and_write": false, 00:08:44.159 "abort": true, 00:08:44.159 "seek_hole": false, 00:08:44.159 "seek_data": false, 00:08:44.159 "copy": true, 00:08:44.159 "nvme_iov_md": false 00:08:44.159 }, 00:08:44.159 "memory_domains": [ 00:08:44.159 { 00:08:44.159 "dma_device_id": "system", 00:08:44.159 "dma_device_type": 1 00:08:44.159 }, 00:08:44.159 { 00:08:44.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.159 "dma_device_type": 2 00:08:44.159 } 00:08:44.159 ], 00:08:44.159 "driver_specific": {} 00:08:44.159 } 00:08:44.159 ] 00:08:44.159 10:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.159 10:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:44.159 10:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:08:44.159 10:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.159 10:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.159 10:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:44.159 10:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.159 10:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.159 10:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.159 10:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.159 10:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.159 10:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.159 10:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.159 10:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.159 10:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.159 10:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.159 10:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.159 10:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.159 "name": "Existed_Raid", 00:08:44.159 "uuid": "5d59ab8c-072e-4c0b-a7a1-b48929d0ac9e", 00:08:44.159 "strip_size_kb": 64, 00:08:44.159 "state": "configuring", 00:08:44.159 "raid_level": "raid0", 00:08:44.159 "superblock": true, 
00:08:44.159 "num_base_bdevs": 3, 00:08:44.159 "num_base_bdevs_discovered": 2, 00:08:44.159 "num_base_bdevs_operational": 3, 00:08:44.159 "base_bdevs_list": [ 00:08:44.159 { 00:08:44.159 "name": "BaseBdev1", 00:08:44.159 "uuid": "6a464adc-372b-4bd5-afc8-acd3c5db74b6", 00:08:44.159 "is_configured": true, 00:08:44.159 "data_offset": 2048, 00:08:44.159 "data_size": 63488 00:08:44.159 }, 00:08:44.159 { 00:08:44.159 "name": null, 00:08:44.159 "uuid": "6c24460a-efa6-4ea9-915a-12897cb52d6a", 00:08:44.159 "is_configured": false, 00:08:44.159 "data_offset": 0, 00:08:44.159 "data_size": 63488 00:08:44.159 }, 00:08:44.159 { 00:08:44.159 "name": "BaseBdev3", 00:08:44.159 "uuid": "9333623c-bb72-44cd-a4ed-cff23db5db3a", 00:08:44.159 "is_configured": true, 00:08:44.159 "data_offset": 2048, 00:08:44.159 "data_size": 63488 00:08:44.159 } 00:08:44.159 ] 00:08:44.159 }' 00:08:44.159 10:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.159 10:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.725 10:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.725 10:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:44.725 10:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.725 10:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.725 10:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.725 10:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:44.725 10:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:44.725 10:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:08:44.725 10:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.725 [2024-11-15 10:37:05.748106] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:44.725 10:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.725 10:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:44.725 10:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.726 10:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.726 10:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:44.726 10:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.726 10:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.726 10:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.726 10:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.726 10:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.726 10:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.726 10:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.726 10:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.726 10:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.726 10:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:44.726 10:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.726 10:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.726 "name": "Existed_Raid", 00:08:44.726 "uuid": "5d59ab8c-072e-4c0b-a7a1-b48929d0ac9e", 00:08:44.726 "strip_size_kb": 64, 00:08:44.726 "state": "configuring", 00:08:44.726 "raid_level": "raid0", 00:08:44.726 "superblock": true, 00:08:44.726 "num_base_bdevs": 3, 00:08:44.726 "num_base_bdevs_discovered": 1, 00:08:44.726 "num_base_bdevs_operational": 3, 00:08:44.726 "base_bdevs_list": [ 00:08:44.726 { 00:08:44.726 "name": "BaseBdev1", 00:08:44.726 "uuid": "6a464adc-372b-4bd5-afc8-acd3c5db74b6", 00:08:44.726 "is_configured": true, 00:08:44.726 "data_offset": 2048, 00:08:44.726 "data_size": 63488 00:08:44.726 }, 00:08:44.726 { 00:08:44.726 "name": null, 00:08:44.726 "uuid": "6c24460a-efa6-4ea9-915a-12897cb52d6a", 00:08:44.726 "is_configured": false, 00:08:44.726 "data_offset": 0, 00:08:44.726 "data_size": 63488 00:08:44.726 }, 00:08:44.726 { 00:08:44.726 "name": null, 00:08:44.726 "uuid": "9333623c-bb72-44cd-a4ed-cff23db5db3a", 00:08:44.726 "is_configured": false, 00:08:44.726 "data_offset": 0, 00:08:44.726 "data_size": 63488 00:08:44.726 } 00:08:44.726 ] 00:08:44.726 }' 00:08:44.726 10:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.726 10:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.294 10:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:45.294 10:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.294 10:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.294 10:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:45.294 10:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.294 10:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:45.294 10:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:45.294 10:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.294 10:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.294 [2024-11-15 10:37:06.304284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:45.294 10:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.294 10:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:45.294 10:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.294 10:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.294 10:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:45.294 10:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.294 10:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:45.294 10:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.294 10:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.294 10:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.294 10:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:45.294 10:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.294 10:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.294 10:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.294 10:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.294 10:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.294 10:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.294 "name": "Existed_Raid", 00:08:45.294 "uuid": "5d59ab8c-072e-4c0b-a7a1-b48929d0ac9e", 00:08:45.294 "strip_size_kb": 64, 00:08:45.294 "state": "configuring", 00:08:45.294 "raid_level": "raid0", 00:08:45.294 "superblock": true, 00:08:45.294 "num_base_bdevs": 3, 00:08:45.294 "num_base_bdevs_discovered": 2, 00:08:45.294 "num_base_bdevs_operational": 3, 00:08:45.294 "base_bdevs_list": [ 00:08:45.294 { 00:08:45.294 "name": "BaseBdev1", 00:08:45.294 "uuid": "6a464adc-372b-4bd5-afc8-acd3c5db74b6", 00:08:45.294 "is_configured": true, 00:08:45.294 "data_offset": 2048, 00:08:45.294 "data_size": 63488 00:08:45.294 }, 00:08:45.294 { 00:08:45.294 "name": null, 00:08:45.294 "uuid": "6c24460a-efa6-4ea9-915a-12897cb52d6a", 00:08:45.294 "is_configured": false, 00:08:45.294 "data_offset": 0, 00:08:45.294 "data_size": 63488 00:08:45.294 }, 00:08:45.294 { 00:08:45.294 "name": "BaseBdev3", 00:08:45.294 "uuid": "9333623c-bb72-44cd-a4ed-cff23db5db3a", 00:08:45.294 "is_configured": true, 00:08:45.294 "data_offset": 2048, 00:08:45.294 "data_size": 63488 00:08:45.294 } 00:08:45.294 ] 00:08:45.294 }' 00:08:45.294 10:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.294 10:37:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:45.860 10:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:45.860 10:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.860 10:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.860 10:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.860 10:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.860 10:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:45.861 10:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:45.861 10:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.861 10:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.861 [2024-11-15 10:37:06.868466] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:45.861 10:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.861 10:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:45.861 10:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.861 10:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.861 10:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:45.861 10:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.861 10:37:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:45.861 10:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.861 10:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.861 10:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.861 10:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.861 10:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.861 10:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.861 10:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.861 10:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.861 10:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.861 10:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.861 "name": "Existed_Raid", 00:08:45.861 "uuid": "5d59ab8c-072e-4c0b-a7a1-b48929d0ac9e", 00:08:45.861 "strip_size_kb": 64, 00:08:45.861 "state": "configuring", 00:08:45.861 "raid_level": "raid0", 00:08:45.861 "superblock": true, 00:08:45.861 "num_base_bdevs": 3, 00:08:45.861 "num_base_bdevs_discovered": 1, 00:08:45.861 "num_base_bdevs_operational": 3, 00:08:45.861 "base_bdevs_list": [ 00:08:45.861 { 00:08:45.861 "name": null, 00:08:45.861 "uuid": "6a464adc-372b-4bd5-afc8-acd3c5db74b6", 00:08:45.861 "is_configured": false, 00:08:45.861 "data_offset": 0, 00:08:45.861 "data_size": 63488 00:08:45.861 }, 00:08:45.861 { 00:08:45.861 "name": null, 00:08:45.861 "uuid": "6c24460a-efa6-4ea9-915a-12897cb52d6a", 00:08:45.861 "is_configured": false, 00:08:45.861 "data_offset": 0, 00:08:45.861 
"data_size": 63488 00:08:45.861 }, 00:08:45.861 { 00:08:45.861 "name": "BaseBdev3", 00:08:45.861 "uuid": "9333623c-bb72-44cd-a4ed-cff23db5db3a", 00:08:45.861 "is_configured": true, 00:08:45.861 "data_offset": 2048, 00:08:45.861 "data_size": 63488 00:08:45.861 } 00:08:45.861 ] 00:08:45.861 }' 00:08:45.861 10:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.861 10:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.426 10:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.426 10:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.426 10:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.426 10:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:46.426 10:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.426 10:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:46.426 10:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:46.426 10:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.426 10:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.426 [2024-11-15 10:37:07.518141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:46.426 10:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.426 10:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:46.426 10:37:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.426 10:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.426 10:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:46.426 10:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.426 10:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.426 10:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.426 10:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.426 10:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.426 10:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.426 10:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.426 10:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.426 10:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.426 10:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.426 10:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.426 10:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.426 "name": "Existed_Raid", 00:08:46.426 "uuid": "5d59ab8c-072e-4c0b-a7a1-b48929d0ac9e", 00:08:46.426 "strip_size_kb": 64, 00:08:46.426 "state": "configuring", 00:08:46.426 "raid_level": "raid0", 00:08:46.426 "superblock": true, 00:08:46.426 "num_base_bdevs": 3, 00:08:46.426 
"num_base_bdevs_discovered": 2, 00:08:46.426 "num_base_bdevs_operational": 3, 00:08:46.426 "base_bdevs_list": [ 00:08:46.426 { 00:08:46.426 "name": null, 00:08:46.426 "uuid": "6a464adc-372b-4bd5-afc8-acd3c5db74b6", 00:08:46.426 "is_configured": false, 00:08:46.426 "data_offset": 0, 00:08:46.426 "data_size": 63488 00:08:46.426 }, 00:08:46.426 { 00:08:46.426 "name": "BaseBdev2", 00:08:46.426 "uuid": "6c24460a-efa6-4ea9-915a-12897cb52d6a", 00:08:46.426 "is_configured": true, 00:08:46.426 "data_offset": 2048, 00:08:46.426 "data_size": 63488 00:08:46.426 }, 00:08:46.426 { 00:08:46.426 "name": "BaseBdev3", 00:08:46.426 "uuid": "9333623c-bb72-44cd-a4ed-cff23db5db3a", 00:08:46.426 "is_configured": true, 00:08:46.426 "data_offset": 2048, 00:08:46.426 "data_size": 63488 00:08:46.426 } 00:08:46.426 ] 00:08:46.426 }' 00:08:46.426 10:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.426 10:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.992 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:46.992 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.992 10:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.992 10:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.992 10:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.992 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:46.992 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:46.992 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.992 10:37:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.992 10:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.992 10:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.992 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6a464adc-372b-4bd5-afc8-acd3c5db74b6 00:08:46.992 10:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.992 10:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.250 [2024-11-15 10:37:08.176412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:47.250 [2024-11-15 10:37:08.176700] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:47.250 [2024-11-15 10:37:08.176725] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:47.250 [2024-11-15 10:37:08.177055] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:47.250 NewBaseBdev 00:08:47.250 [2024-11-15 10:37:08.177256] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:47.250 [2024-11-15 10:37:08.177274] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:47.250 [2024-11-15 10:37:08.177442] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:47.250 10:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.251 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:47.251 10:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:47.251 
10:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:47.251 10:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:47.251 10:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:47.251 10:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:47.251 10:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:47.251 10:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.251 10:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.251 10:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.251 10:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:47.251 10:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.251 10:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.251 [ 00:08:47.251 { 00:08:47.251 "name": "NewBaseBdev", 00:08:47.251 "aliases": [ 00:08:47.251 "6a464adc-372b-4bd5-afc8-acd3c5db74b6" 00:08:47.251 ], 00:08:47.251 "product_name": "Malloc disk", 00:08:47.251 "block_size": 512, 00:08:47.251 "num_blocks": 65536, 00:08:47.251 "uuid": "6a464adc-372b-4bd5-afc8-acd3c5db74b6", 00:08:47.251 "assigned_rate_limits": { 00:08:47.251 "rw_ios_per_sec": 0, 00:08:47.251 "rw_mbytes_per_sec": 0, 00:08:47.251 "r_mbytes_per_sec": 0, 00:08:47.251 "w_mbytes_per_sec": 0 00:08:47.251 }, 00:08:47.251 "claimed": true, 00:08:47.251 "claim_type": "exclusive_write", 00:08:47.251 "zoned": false, 00:08:47.251 "supported_io_types": { 00:08:47.251 "read": true, 00:08:47.251 "write": true, 00:08:47.251 
"unmap": true, 00:08:47.251 "flush": true, 00:08:47.251 "reset": true, 00:08:47.251 "nvme_admin": false, 00:08:47.251 "nvme_io": false, 00:08:47.251 "nvme_io_md": false, 00:08:47.251 "write_zeroes": true, 00:08:47.251 "zcopy": true, 00:08:47.251 "get_zone_info": false, 00:08:47.251 "zone_management": false, 00:08:47.251 "zone_append": false, 00:08:47.251 "compare": false, 00:08:47.251 "compare_and_write": false, 00:08:47.251 "abort": true, 00:08:47.251 "seek_hole": false, 00:08:47.251 "seek_data": false, 00:08:47.251 "copy": true, 00:08:47.251 "nvme_iov_md": false 00:08:47.251 }, 00:08:47.251 "memory_domains": [ 00:08:47.251 { 00:08:47.251 "dma_device_id": "system", 00:08:47.251 "dma_device_type": 1 00:08:47.251 }, 00:08:47.251 { 00:08:47.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.251 "dma_device_type": 2 00:08:47.251 } 00:08:47.251 ], 00:08:47.251 "driver_specific": {} 00:08:47.251 } 00:08:47.251 ] 00:08:47.251 10:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.251 10:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:47.251 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:47.251 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.251 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:47.251 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:47.251 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.251 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.251 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:08:47.251 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.251 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.251 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.251 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.251 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.251 10:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.251 10:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.251 10:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.251 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.251 "name": "Existed_Raid", 00:08:47.251 "uuid": "5d59ab8c-072e-4c0b-a7a1-b48929d0ac9e", 00:08:47.251 "strip_size_kb": 64, 00:08:47.251 "state": "online", 00:08:47.251 "raid_level": "raid0", 00:08:47.251 "superblock": true, 00:08:47.251 "num_base_bdevs": 3, 00:08:47.251 "num_base_bdevs_discovered": 3, 00:08:47.251 "num_base_bdevs_operational": 3, 00:08:47.251 "base_bdevs_list": [ 00:08:47.251 { 00:08:47.251 "name": "NewBaseBdev", 00:08:47.251 "uuid": "6a464adc-372b-4bd5-afc8-acd3c5db74b6", 00:08:47.251 "is_configured": true, 00:08:47.251 "data_offset": 2048, 00:08:47.251 "data_size": 63488 00:08:47.251 }, 00:08:47.251 { 00:08:47.251 "name": "BaseBdev2", 00:08:47.251 "uuid": "6c24460a-efa6-4ea9-915a-12897cb52d6a", 00:08:47.251 "is_configured": true, 00:08:47.251 "data_offset": 2048, 00:08:47.251 "data_size": 63488 00:08:47.251 }, 00:08:47.251 { 00:08:47.251 "name": "BaseBdev3", 00:08:47.251 "uuid": "9333623c-bb72-44cd-a4ed-cff23db5db3a", 00:08:47.251 
"is_configured": true, 00:08:47.251 "data_offset": 2048, 00:08:47.251 "data_size": 63488 00:08:47.251 } 00:08:47.251 ] 00:08:47.251 }' 00:08:47.251 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.251 10:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.817 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:47.817 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:47.817 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:47.817 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:47.817 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:47.817 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:47.817 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:47.817 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:47.817 10:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.817 10:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.817 [2024-11-15 10:37:08.717006] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:47.817 10:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.817 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:47.817 "name": "Existed_Raid", 00:08:47.817 "aliases": [ 00:08:47.817 "5d59ab8c-072e-4c0b-a7a1-b48929d0ac9e" 00:08:47.817 ], 00:08:47.817 "product_name": "Raid 
Volume", 00:08:47.817 "block_size": 512, 00:08:47.817 "num_blocks": 190464, 00:08:47.817 "uuid": "5d59ab8c-072e-4c0b-a7a1-b48929d0ac9e", 00:08:47.817 "assigned_rate_limits": { 00:08:47.817 "rw_ios_per_sec": 0, 00:08:47.817 "rw_mbytes_per_sec": 0, 00:08:47.817 "r_mbytes_per_sec": 0, 00:08:47.817 "w_mbytes_per_sec": 0 00:08:47.817 }, 00:08:47.817 "claimed": false, 00:08:47.817 "zoned": false, 00:08:47.817 "supported_io_types": { 00:08:47.817 "read": true, 00:08:47.817 "write": true, 00:08:47.817 "unmap": true, 00:08:47.817 "flush": true, 00:08:47.817 "reset": true, 00:08:47.817 "nvme_admin": false, 00:08:47.817 "nvme_io": false, 00:08:47.817 "nvme_io_md": false, 00:08:47.817 "write_zeroes": true, 00:08:47.817 "zcopy": false, 00:08:47.817 "get_zone_info": false, 00:08:47.817 "zone_management": false, 00:08:47.817 "zone_append": false, 00:08:47.817 "compare": false, 00:08:47.817 "compare_and_write": false, 00:08:47.817 "abort": false, 00:08:47.817 "seek_hole": false, 00:08:47.817 "seek_data": false, 00:08:47.817 "copy": false, 00:08:47.817 "nvme_iov_md": false 00:08:47.817 }, 00:08:47.817 "memory_domains": [ 00:08:47.817 { 00:08:47.817 "dma_device_id": "system", 00:08:47.817 "dma_device_type": 1 00:08:47.817 }, 00:08:47.817 { 00:08:47.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.817 "dma_device_type": 2 00:08:47.817 }, 00:08:47.817 { 00:08:47.817 "dma_device_id": "system", 00:08:47.817 "dma_device_type": 1 00:08:47.817 }, 00:08:47.817 { 00:08:47.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.817 "dma_device_type": 2 00:08:47.817 }, 00:08:47.817 { 00:08:47.817 "dma_device_id": "system", 00:08:47.817 "dma_device_type": 1 00:08:47.817 }, 00:08:47.817 { 00:08:47.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.817 "dma_device_type": 2 00:08:47.817 } 00:08:47.817 ], 00:08:47.817 "driver_specific": { 00:08:47.817 "raid": { 00:08:47.817 "uuid": "5d59ab8c-072e-4c0b-a7a1-b48929d0ac9e", 00:08:47.817 "strip_size_kb": 64, 00:08:47.817 "state": "online", 
00:08:47.817 "raid_level": "raid0", 00:08:47.817 "superblock": true, 00:08:47.817 "num_base_bdevs": 3, 00:08:47.817 "num_base_bdevs_discovered": 3, 00:08:47.817 "num_base_bdevs_operational": 3, 00:08:47.817 "base_bdevs_list": [ 00:08:47.817 { 00:08:47.817 "name": "NewBaseBdev", 00:08:47.817 "uuid": "6a464adc-372b-4bd5-afc8-acd3c5db74b6", 00:08:47.817 "is_configured": true, 00:08:47.817 "data_offset": 2048, 00:08:47.817 "data_size": 63488 00:08:47.817 }, 00:08:47.817 { 00:08:47.817 "name": "BaseBdev2", 00:08:47.817 "uuid": "6c24460a-efa6-4ea9-915a-12897cb52d6a", 00:08:47.817 "is_configured": true, 00:08:47.817 "data_offset": 2048, 00:08:47.817 "data_size": 63488 00:08:47.817 }, 00:08:47.817 { 00:08:47.817 "name": "BaseBdev3", 00:08:47.817 "uuid": "9333623c-bb72-44cd-a4ed-cff23db5db3a", 00:08:47.817 "is_configured": true, 00:08:47.817 "data_offset": 2048, 00:08:47.817 "data_size": 63488 00:08:47.817 } 00:08:47.817 ] 00:08:47.817 } 00:08:47.817 } 00:08:47.817 }' 00:08:47.817 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:47.817 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:47.817 BaseBdev2 00:08:47.817 BaseBdev3' 00:08:47.817 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.817 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:47.818 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:47.818 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:47.818 10:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.818 10:37:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.818 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.818 10:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.818 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:47.818 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:47.818 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:47.818 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:47.818 10:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.818 10:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.818 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.818 10:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.075 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:48.075 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:48.075 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:48.075 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:48.075 10:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:48.075 10:37:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.075 10:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.075 10:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.075 10:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:48.075 10:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:48.075 10:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:48.075 10:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.075 10:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.075 [2024-11-15 10:37:09.048778] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:48.075 [2024-11-15 10:37:09.048847] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:48.075 [2024-11-15 10:37:09.048983] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:48.075 [2024-11-15 10:37:09.049082] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:48.075 [2024-11-15 10:37:09.049109] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:48.075 10:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.075 10:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64426 00:08:48.075 10:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64426 ']' 00:08:48.075 10:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 
64426 00:08:48.075 10:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:48.075 10:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:48.075 10:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64426 00:08:48.075 10:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:48.075 killing process with pid 64426 00:08:48.075 10:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:48.076 10:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64426' 00:08:48.076 10:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64426 00:08:48.076 [2024-11-15 10:37:09.084981] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:48.076 10:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64426 00:08:48.358 [2024-11-15 10:37:09.375004] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:49.769 10:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:49.769 00:08:49.769 real 0m11.796s 00:08:49.769 user 0m19.593s 00:08:49.769 sys 0m1.543s 00:08:49.769 10:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.769 10:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.769 ************************************ 00:08:49.769 END TEST raid_state_function_test_sb 00:08:49.769 ************************************ 00:08:49.769 10:37:10 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:49.769 10:37:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:49.769 
10:37:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:49.769 10:37:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:49.769 ************************************ 00:08:49.769 START TEST raid_superblock_test 00:08:49.769 ************************************ 00:08:49.769 10:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:08:49.769 10:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:49.769 10:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:49.769 10:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:49.769 10:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:49.769 10:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:49.769 10:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:49.769 10:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:49.769 10:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:49.769 10:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:49.769 10:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:49.769 10:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:49.769 10:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:49.769 10:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:49.769 10:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:49.769 10:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 
00:08:49.769 10:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:49.769 10:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65063 00:08:49.769 10:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65063 00:08:49.769 10:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:49.769 10:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65063 ']' 00:08:49.769 10:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.769 10:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.769 10:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.769 10:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.769 10:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.769 [2024-11-15 10:37:10.667756] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:08:49.769 [2024-11-15 10:37:10.667945] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65063 ] 00:08:49.770 [2024-11-15 10:37:10.851077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.027 [2024-11-15 10:37:10.997120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.284 [2024-11-15 10:37:11.226614] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:50.284 [2024-11-15 10:37:11.226723] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:50.545 10:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:50.545 10:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:50.545 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:50.545 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:50.545 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:50.545 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:50.545 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:50.545 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:50.545 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:50.545 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:50.545 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:50.545 
10:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.545 10:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.545 malloc1 00:08:50.545 10:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.545 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:50.545 10:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.545 10:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.545 [2024-11-15 10:37:11.701176] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:50.545 [2024-11-15 10:37:11.701251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:50.545 [2024-11-15 10:37:11.701287] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:50.545 [2024-11-15 10:37:11.701303] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:50.545 [2024-11-15 10:37:11.704066] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:50.545 [2024-11-15 10:37:11.704109] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:50.803 pt1 00:08:50.803 10:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.803 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:50.803 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:50.803 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:50.803 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:50.803 10:37:11 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:50.803 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:50.803 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:50.803 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:50.803 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:50.803 10:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.803 10:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.803 malloc2 00:08:50.803 10:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.803 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:50.803 10:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.803 10:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.803 [2024-11-15 10:37:11.753708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:50.803 [2024-11-15 10:37:11.753771] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:50.803 [2024-11-15 10:37:11.753803] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:50.803 [2024-11-15 10:37:11.753819] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:50.803 [2024-11-15 10:37:11.756632] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:50.803 [2024-11-15 10:37:11.756675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:50.803 
pt2 00:08:50.803 10:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.803 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:50.803 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:50.803 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:50.803 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:50.803 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:50.803 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:50.803 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:50.803 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:50.803 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:50.803 10:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.803 10:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.803 malloc3 00:08:50.803 10:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.803 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:50.803 10:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.803 10:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.803 [2024-11-15 10:37:11.813838] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:50.803 [2024-11-15 10:37:11.813901] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:50.803 [2024-11-15 10:37:11.813935] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:50.803 [2024-11-15 10:37:11.813951] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:50.803 [2024-11-15 10:37:11.816834] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:50.803 [2024-11-15 10:37:11.816892] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:50.803 pt3 00:08:50.803 10:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.803 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:50.803 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:50.803 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:50.803 10:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.803 10:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.803 [2024-11-15 10:37:11.821887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:50.803 [2024-11-15 10:37:11.824358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:50.803 [2024-11-15 10:37:11.824463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:50.804 [2024-11-15 10:37:11.824735] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:50.804 [2024-11-15 10:37:11.824760] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:50.804 [2024-11-15 10:37:11.825064] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:08:50.804 [2024-11-15 10:37:11.825300] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:50.804 [2024-11-15 10:37:11.825327] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:50.804 [2024-11-15 10:37:11.825528] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:50.804 10:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.804 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:50.804 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:50.804 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:50.804 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:50.804 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.804 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.804 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.804 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.804 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.804 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.804 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.804 10:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.804 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:50.804 10:37:11 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.804 10:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.804 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.804 "name": "raid_bdev1", 00:08:50.804 "uuid": "d0181442-828d-4928-b6ba-075dced7a1a3", 00:08:50.804 "strip_size_kb": 64, 00:08:50.804 "state": "online", 00:08:50.804 "raid_level": "raid0", 00:08:50.804 "superblock": true, 00:08:50.804 "num_base_bdevs": 3, 00:08:50.804 "num_base_bdevs_discovered": 3, 00:08:50.804 "num_base_bdevs_operational": 3, 00:08:50.804 "base_bdevs_list": [ 00:08:50.804 { 00:08:50.804 "name": "pt1", 00:08:50.804 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:50.804 "is_configured": true, 00:08:50.804 "data_offset": 2048, 00:08:50.804 "data_size": 63488 00:08:50.804 }, 00:08:50.804 { 00:08:50.804 "name": "pt2", 00:08:50.804 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:50.804 "is_configured": true, 00:08:50.804 "data_offset": 2048, 00:08:50.804 "data_size": 63488 00:08:50.804 }, 00:08:50.804 { 00:08:50.804 "name": "pt3", 00:08:50.804 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:50.804 "is_configured": true, 00:08:50.804 "data_offset": 2048, 00:08:50.804 "data_size": 63488 00:08:50.804 } 00:08:50.804 ] 00:08:50.804 }' 00:08:50.804 10:37:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.804 10:37:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.369 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:51.369 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:51.369 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:51.369 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:51.369 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:51.369 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:51.370 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:51.370 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:51.370 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.370 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.370 [2024-11-15 10:37:12.334397] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:51.370 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.370 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:51.370 "name": "raid_bdev1", 00:08:51.370 "aliases": [ 00:08:51.370 "d0181442-828d-4928-b6ba-075dced7a1a3" 00:08:51.370 ], 00:08:51.370 "product_name": "Raid Volume", 00:08:51.370 "block_size": 512, 00:08:51.370 "num_blocks": 190464, 00:08:51.370 "uuid": "d0181442-828d-4928-b6ba-075dced7a1a3", 00:08:51.370 "assigned_rate_limits": { 00:08:51.370 "rw_ios_per_sec": 0, 00:08:51.370 "rw_mbytes_per_sec": 0, 00:08:51.370 "r_mbytes_per_sec": 0, 00:08:51.370 "w_mbytes_per_sec": 0 00:08:51.370 }, 00:08:51.370 "claimed": false, 00:08:51.370 "zoned": false, 00:08:51.370 "supported_io_types": { 00:08:51.370 "read": true, 00:08:51.370 "write": true, 00:08:51.370 "unmap": true, 00:08:51.370 "flush": true, 00:08:51.370 "reset": true, 00:08:51.370 "nvme_admin": false, 00:08:51.370 "nvme_io": false, 00:08:51.370 "nvme_io_md": false, 00:08:51.370 "write_zeroes": true, 00:08:51.370 "zcopy": false, 00:08:51.370 "get_zone_info": false, 00:08:51.370 "zone_management": false, 00:08:51.370 "zone_append": false, 00:08:51.370 "compare": 
false, 00:08:51.370 "compare_and_write": false, 00:08:51.370 "abort": false, 00:08:51.370 "seek_hole": false, 00:08:51.370 "seek_data": false, 00:08:51.370 "copy": false, 00:08:51.370 "nvme_iov_md": false 00:08:51.370 }, 00:08:51.370 "memory_domains": [ 00:08:51.370 { 00:08:51.370 "dma_device_id": "system", 00:08:51.370 "dma_device_type": 1 00:08:51.370 }, 00:08:51.370 { 00:08:51.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.370 "dma_device_type": 2 00:08:51.370 }, 00:08:51.370 { 00:08:51.370 "dma_device_id": "system", 00:08:51.370 "dma_device_type": 1 00:08:51.370 }, 00:08:51.370 { 00:08:51.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.370 "dma_device_type": 2 00:08:51.370 }, 00:08:51.370 { 00:08:51.370 "dma_device_id": "system", 00:08:51.370 "dma_device_type": 1 00:08:51.370 }, 00:08:51.370 { 00:08:51.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.370 "dma_device_type": 2 00:08:51.370 } 00:08:51.370 ], 00:08:51.370 "driver_specific": { 00:08:51.370 "raid": { 00:08:51.370 "uuid": "d0181442-828d-4928-b6ba-075dced7a1a3", 00:08:51.370 "strip_size_kb": 64, 00:08:51.370 "state": "online", 00:08:51.370 "raid_level": "raid0", 00:08:51.370 "superblock": true, 00:08:51.370 "num_base_bdevs": 3, 00:08:51.370 "num_base_bdevs_discovered": 3, 00:08:51.370 "num_base_bdevs_operational": 3, 00:08:51.370 "base_bdevs_list": [ 00:08:51.370 { 00:08:51.370 "name": "pt1", 00:08:51.370 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:51.370 "is_configured": true, 00:08:51.370 "data_offset": 2048, 00:08:51.370 "data_size": 63488 00:08:51.370 }, 00:08:51.370 { 00:08:51.370 "name": "pt2", 00:08:51.370 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:51.370 "is_configured": true, 00:08:51.370 "data_offset": 2048, 00:08:51.370 "data_size": 63488 00:08:51.370 }, 00:08:51.370 { 00:08:51.370 "name": "pt3", 00:08:51.370 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:51.370 "is_configured": true, 00:08:51.370 "data_offset": 2048, 00:08:51.370 "data_size": 
63488 00:08:51.370 } 00:08:51.370 ] 00:08:51.370 } 00:08:51.370 } 00:08:51.370 }' 00:08:51.370 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:51.370 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:51.370 pt2 00:08:51.370 pt3' 00:08:51.370 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.370 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:51.370 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.370 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:51.370 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.370 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.370 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.370 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.628 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.628 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.628 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.628 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.628 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:51.628 10:37:12 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.628 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.628 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.628 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.628 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.628 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.628 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:51.628 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.628 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.628 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.628 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.628 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.628 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.628 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:51.628 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.628 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:51.628 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.628 [2024-11-15 10:37:12.638383] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:51.628 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:08:51.628 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d0181442-828d-4928-b6ba-075dced7a1a3 00:08:51.628 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d0181442-828d-4928-b6ba-075dced7a1a3 ']' 00:08:51.628 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:51.628 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.628 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.628 [2024-11-15 10:37:12.686083] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:51.628 [2024-11-15 10:37:12.686114] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:51.628 [2024-11-15 10:37:12.686233] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:51.628 [2024-11-15 10:37:12.686313] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:51.628 [2024-11-15 10:37:12.686330] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:51.628 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.629 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.629 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.629 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.629 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:51.629 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.629 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:08:51.629 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:51.629 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:51.629 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:51.629 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.629 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.629 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.629 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:51.629 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:51.629 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.629 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.629 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.629 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:51.629 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:51.629 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.629 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.629 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.629 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:51.629 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:51.629 10:37:12 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.629 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.886 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.886 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:51.886 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:51.886 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:51.886 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:51.886 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:51.886 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:51.887 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:51.887 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:51.887 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:51.887 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.887 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.887 [2024-11-15 10:37:12.834212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:51.887 [2024-11-15 10:37:12.836672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:51.887 [2024-11-15 10:37:12.836749] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:51.887 [2024-11-15 10:37:12.836818] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:51.887 [2024-11-15 10:37:12.836882] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:51.887 [2024-11-15 10:37:12.836914] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:51.887 [2024-11-15 10:37:12.836940] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:51.887 [2024-11-15 10:37:12.836964] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:51.887 request: 00:08:51.887 { 00:08:51.887 "name": "raid_bdev1", 00:08:51.887 "raid_level": "raid0", 00:08:51.887 "base_bdevs": [ 00:08:51.887 "malloc1", 00:08:51.887 "malloc2", 00:08:51.887 "malloc3" 00:08:51.887 ], 00:08:51.887 "strip_size_kb": 64, 00:08:51.887 "superblock": false, 00:08:51.887 "method": "bdev_raid_create", 00:08:51.887 "req_id": 1 00:08:51.887 } 00:08:51.887 Got JSON-RPC error response 00:08:51.887 response: 00:08:51.887 { 00:08:51.887 "code": -17, 00:08:51.887 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:51.887 } 00:08:51.887 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:51.887 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:51.887 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:51.887 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:51.887 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:51.887 10:37:12 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.887 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.887 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.887 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:51.887 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.887 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:51.887 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:51.887 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:51.887 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.887 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.887 [2024-11-15 10:37:12.894149] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:51.887 [2024-11-15 10:37:12.894209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.887 [2024-11-15 10:37:12.894238] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:51.887 [2024-11-15 10:37:12.894253] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.887 [2024-11-15 10:37:12.897109] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.887 [2024-11-15 10:37:12.897152] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:51.887 [2024-11-15 10:37:12.897248] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:51.887 [2024-11-15 10:37:12.897315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:08:51.887 pt1 00:08:51.887 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.887 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:51.887 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:51.887 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.887 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:51.887 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.887 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.887 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.887 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.887 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.887 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.887 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.887 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.887 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.887 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:51.887 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.887 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.887 "name": "raid_bdev1", 00:08:51.887 "uuid": "d0181442-828d-4928-b6ba-075dced7a1a3", 00:08:51.887 
"strip_size_kb": 64, 00:08:51.887 "state": "configuring", 00:08:51.887 "raid_level": "raid0", 00:08:51.887 "superblock": true, 00:08:51.887 "num_base_bdevs": 3, 00:08:51.887 "num_base_bdevs_discovered": 1, 00:08:51.887 "num_base_bdevs_operational": 3, 00:08:51.887 "base_bdevs_list": [ 00:08:51.887 { 00:08:51.887 "name": "pt1", 00:08:51.887 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:51.887 "is_configured": true, 00:08:51.887 "data_offset": 2048, 00:08:51.887 "data_size": 63488 00:08:51.887 }, 00:08:51.887 { 00:08:51.887 "name": null, 00:08:51.887 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:51.887 "is_configured": false, 00:08:51.887 "data_offset": 2048, 00:08:51.887 "data_size": 63488 00:08:51.887 }, 00:08:51.887 { 00:08:51.887 "name": null, 00:08:51.887 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:51.887 "is_configured": false, 00:08:51.887 "data_offset": 2048, 00:08:51.887 "data_size": 63488 00:08:51.887 } 00:08:51.887 ] 00:08:51.887 }' 00:08:51.887 10:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.887 10:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.451 10:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:52.451 10:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:52.451 10:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.451 10:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.451 [2024-11-15 10:37:13.402327] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:52.451 [2024-11-15 10:37:13.402403] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:52.451 [2024-11-15 10:37:13.402438] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:08:52.451 [2024-11-15 10:37:13.402453] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:52.451 [2024-11-15 10:37:13.403019] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:52.451 [2024-11-15 10:37:13.403053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:52.451 [2024-11-15 10:37:13.403160] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:52.451 [2024-11-15 10:37:13.403192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:52.451 pt2 00:08:52.451 10:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.451 10:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:52.451 10:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.451 10:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.451 [2024-11-15 10:37:13.414310] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:52.451 10:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.451 10:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:52.451 10:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:52.451 10:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.451 10:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:52.451 10:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.451 10:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.451 10:37:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.451 10:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.451 10:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.451 10:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.452 10:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.452 10:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:52.452 10:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.452 10:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.452 10:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.452 10:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.452 "name": "raid_bdev1", 00:08:52.452 "uuid": "d0181442-828d-4928-b6ba-075dced7a1a3", 00:08:52.452 "strip_size_kb": 64, 00:08:52.452 "state": "configuring", 00:08:52.452 "raid_level": "raid0", 00:08:52.452 "superblock": true, 00:08:52.452 "num_base_bdevs": 3, 00:08:52.452 "num_base_bdevs_discovered": 1, 00:08:52.452 "num_base_bdevs_operational": 3, 00:08:52.452 "base_bdevs_list": [ 00:08:52.452 { 00:08:52.452 "name": "pt1", 00:08:52.452 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:52.452 "is_configured": true, 00:08:52.452 "data_offset": 2048, 00:08:52.452 "data_size": 63488 00:08:52.452 }, 00:08:52.452 { 00:08:52.452 "name": null, 00:08:52.452 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:52.452 "is_configured": false, 00:08:52.452 "data_offset": 0, 00:08:52.452 "data_size": 63488 00:08:52.452 }, 00:08:52.452 { 00:08:52.452 "name": null, 00:08:52.452 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:52.452 
"is_configured": false, 00:08:52.452 "data_offset": 2048, 00:08:52.452 "data_size": 63488 00:08:52.452 } 00:08:52.452 ] 00:08:52.452 }' 00:08:52.452 10:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.452 10:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.018 10:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:53.018 10:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:53.018 10:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:53.018 10:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.018 10:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.018 [2024-11-15 10:37:13.946446] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:53.018 [2024-11-15 10:37:13.946540] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:53.018 [2024-11-15 10:37:13.946569] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:53.018 [2024-11-15 10:37:13.946588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:53.018 [2024-11-15 10:37:13.947145] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:53.018 [2024-11-15 10:37:13.947188] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:53.018 [2024-11-15 10:37:13.947288] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:53.018 [2024-11-15 10:37:13.947324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:53.018 pt2 00:08:53.018 10:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:53.018 10:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:53.018 10:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:53.018 10:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:53.018 10:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.018 10:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.018 [2024-11-15 10:37:13.954411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:53.018 [2024-11-15 10:37:13.954464] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:53.018 [2024-11-15 10:37:13.954485] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:53.018 [2024-11-15 10:37:13.954518] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:53.018 [2024-11-15 10:37:13.954941] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:53.018 [2024-11-15 10:37:13.954987] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:53.018 [2024-11-15 10:37:13.955060] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:53.018 [2024-11-15 10:37:13.955091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:53.018 [2024-11-15 10:37:13.955241] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:53.018 [2024-11-15 10:37:13.955271] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:53.018 [2024-11-15 10:37:13.955586] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:53.018 [2024-11-15 10:37:13.955779] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:53.018 [2024-11-15 10:37:13.955807] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:53.018 [2024-11-15 10:37:13.955968] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:53.018 pt3 00:08:53.018 10:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.018 10:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:53.018 10:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:53.018 10:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:53.018 10:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:53.018 10:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:53.018 10:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:53.018 10:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.018 10:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.018 10:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.018 10:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.018 10:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.018 10:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.018 10:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.018 10:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:08:53.018 10:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.018 10:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.018 10:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.018 10:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.018 "name": "raid_bdev1", 00:08:53.018 "uuid": "d0181442-828d-4928-b6ba-075dced7a1a3", 00:08:53.018 "strip_size_kb": 64, 00:08:53.018 "state": "online", 00:08:53.018 "raid_level": "raid0", 00:08:53.018 "superblock": true, 00:08:53.018 "num_base_bdevs": 3, 00:08:53.018 "num_base_bdevs_discovered": 3, 00:08:53.018 "num_base_bdevs_operational": 3, 00:08:53.018 "base_bdevs_list": [ 00:08:53.018 { 00:08:53.018 "name": "pt1", 00:08:53.018 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:53.018 "is_configured": true, 00:08:53.018 "data_offset": 2048, 00:08:53.018 "data_size": 63488 00:08:53.018 }, 00:08:53.018 { 00:08:53.018 "name": "pt2", 00:08:53.018 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:53.018 "is_configured": true, 00:08:53.018 "data_offset": 2048, 00:08:53.018 "data_size": 63488 00:08:53.018 }, 00:08:53.018 { 00:08:53.018 "name": "pt3", 00:08:53.018 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:53.018 "is_configured": true, 00:08:53.018 "data_offset": 2048, 00:08:53.018 "data_size": 63488 00:08:53.018 } 00:08:53.018 ] 00:08:53.018 }' 00:08:53.018 10:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.018 10:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.585 10:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:53.585 10:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:53.585 10:37:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:53.585 10:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:53.585 10:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:53.585 10:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:53.585 10:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:53.585 10:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:53.585 10:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.585 10:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.585 [2024-11-15 10:37:14.478986] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:53.585 10:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.585 10:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:53.585 "name": "raid_bdev1", 00:08:53.585 "aliases": [ 00:08:53.585 "d0181442-828d-4928-b6ba-075dced7a1a3" 00:08:53.585 ], 00:08:53.585 "product_name": "Raid Volume", 00:08:53.585 "block_size": 512, 00:08:53.585 "num_blocks": 190464, 00:08:53.585 "uuid": "d0181442-828d-4928-b6ba-075dced7a1a3", 00:08:53.585 "assigned_rate_limits": { 00:08:53.585 "rw_ios_per_sec": 0, 00:08:53.585 "rw_mbytes_per_sec": 0, 00:08:53.585 "r_mbytes_per_sec": 0, 00:08:53.585 "w_mbytes_per_sec": 0 00:08:53.585 }, 00:08:53.585 "claimed": false, 00:08:53.585 "zoned": false, 00:08:53.585 "supported_io_types": { 00:08:53.585 "read": true, 00:08:53.585 "write": true, 00:08:53.585 "unmap": true, 00:08:53.585 "flush": true, 00:08:53.585 "reset": true, 00:08:53.585 "nvme_admin": false, 00:08:53.585 "nvme_io": false, 00:08:53.585 "nvme_io_md": false, 00:08:53.585 
"write_zeroes": true, 00:08:53.585 "zcopy": false, 00:08:53.585 "get_zone_info": false, 00:08:53.585 "zone_management": false, 00:08:53.585 "zone_append": false, 00:08:53.585 "compare": false, 00:08:53.585 "compare_and_write": false, 00:08:53.585 "abort": false, 00:08:53.585 "seek_hole": false, 00:08:53.585 "seek_data": false, 00:08:53.585 "copy": false, 00:08:53.585 "nvme_iov_md": false 00:08:53.585 }, 00:08:53.585 "memory_domains": [ 00:08:53.585 { 00:08:53.585 "dma_device_id": "system", 00:08:53.585 "dma_device_type": 1 00:08:53.585 }, 00:08:53.585 { 00:08:53.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.585 "dma_device_type": 2 00:08:53.585 }, 00:08:53.585 { 00:08:53.585 "dma_device_id": "system", 00:08:53.585 "dma_device_type": 1 00:08:53.585 }, 00:08:53.585 { 00:08:53.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.585 "dma_device_type": 2 00:08:53.585 }, 00:08:53.585 { 00:08:53.585 "dma_device_id": "system", 00:08:53.585 "dma_device_type": 1 00:08:53.585 }, 00:08:53.585 { 00:08:53.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.585 "dma_device_type": 2 00:08:53.586 } 00:08:53.586 ], 00:08:53.586 "driver_specific": { 00:08:53.586 "raid": { 00:08:53.586 "uuid": "d0181442-828d-4928-b6ba-075dced7a1a3", 00:08:53.586 "strip_size_kb": 64, 00:08:53.586 "state": "online", 00:08:53.586 "raid_level": "raid0", 00:08:53.586 "superblock": true, 00:08:53.586 "num_base_bdevs": 3, 00:08:53.586 "num_base_bdevs_discovered": 3, 00:08:53.586 "num_base_bdevs_operational": 3, 00:08:53.586 "base_bdevs_list": [ 00:08:53.586 { 00:08:53.586 "name": "pt1", 00:08:53.586 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:53.586 "is_configured": true, 00:08:53.586 "data_offset": 2048, 00:08:53.586 "data_size": 63488 00:08:53.586 }, 00:08:53.586 { 00:08:53.586 "name": "pt2", 00:08:53.586 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:53.586 "is_configured": true, 00:08:53.586 "data_offset": 2048, 00:08:53.586 "data_size": 63488 00:08:53.586 }, 00:08:53.586 
{ 00:08:53.586 "name": "pt3", 00:08:53.586 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:53.586 "is_configured": true, 00:08:53.586 "data_offset": 2048, 00:08:53.586 "data_size": 63488 00:08:53.586 } 00:08:53.586 ] 00:08:53.586 } 00:08:53.586 } 00:08:53.586 }' 00:08:53.586 10:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:53.586 10:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:53.586 pt2 00:08:53.586 pt3' 00:08:53.586 10:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.586 10:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:53.586 10:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:53.586 10:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:53.586 10:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.586 10:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.586 10:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.586 10:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.586 10:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:53.586 10:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:53.586 10:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:53.586 10:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:53.586 10:37:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.586 10:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.586 10:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.586 10:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.586 10:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:53.586 10:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:53.586 10:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:53.586 10:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.586 10:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:53.586 10:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.586 10:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.855 10:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.855 10:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:53.855 10:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:53.855 10:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:53.855 10:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.855 10:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.855 10:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:53.855 
[2024-11-15 10:37:14.791002] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:53.855 10:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.855 10:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d0181442-828d-4928-b6ba-075dced7a1a3 '!=' d0181442-828d-4928-b6ba-075dced7a1a3 ']' 00:08:53.855 10:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:53.855 10:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:53.855 10:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:53.855 10:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65063 00:08:53.855 10:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65063 ']' 00:08:53.855 10:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65063 00:08:53.855 10:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:53.855 10:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:53.855 10:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65063 00:08:53.855 10:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:53.855 10:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:53.855 killing process with pid 65063 00:08:53.855 10:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65063' 00:08:53.855 10:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65063 00:08:53.855 [2024-11-15 10:37:14.872246] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:53.855 10:37:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@978 -- # wait 65063 00:08:53.855 [2024-11-15 10:37:14.872380] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:53.855 [2024-11-15 10:37:14.872459] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:53.855 [2024-11-15 10:37:14.872479] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:54.162 [2024-11-15 10:37:15.142975] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:55.094 10:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:55.094 00:08:55.094 real 0m5.675s 00:08:55.094 user 0m8.513s 00:08:55.094 sys 0m0.832s 00:08:55.094 10:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:55.094 10:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.094 ************************************ 00:08:55.094 END TEST raid_superblock_test 00:08:55.094 ************************************ 00:08:55.353 10:37:16 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:08:55.353 10:37:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:55.353 10:37:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.353 10:37:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:55.353 ************************************ 00:08:55.353 START TEST raid_read_error_test 00:08:55.353 ************************************ 00:08:55.353 10:37:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:08:55.353 10:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:55.353 10:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:55.353 10:37:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:55.353 10:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:55.353 10:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:55.353 10:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:55.353 10:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:55.353 10:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:55.353 10:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:55.353 10:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:55.353 10:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:55.353 10:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:55.353 10:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:55.353 10:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:55.353 10:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:55.353 10:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:55.353 10:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:55.353 10:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:55.353 10:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:55.353 10:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:55.353 10:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:55.353 10:37:16 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:55.353 10:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:55.353 10:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:55.353 10:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:55.353 10:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.2H8vjm5Jlm 00:08:55.353 10:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65321 00:08:55.353 10:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65321 00:08:55.353 10:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:55.353 10:37:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65321 ']' 00:08:55.353 10:37:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.353 10:37:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:55.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.353 10:37:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.353 10:37:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:55.353 10:37:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.353 [2024-11-15 10:37:16.406054] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:08:55.353 [2024-11-15 10:37:16.406245] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65321 ] 00:08:55.611 [2024-11-15 10:37:16.587598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.611 [2024-11-15 10:37:16.720035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.869 [2024-11-15 10:37:16.932873] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:55.869 [2024-11-15 10:37:16.932957] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:56.435 10:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:56.435 10:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:56.435 10:37:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:56.435 10:37:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:56.435 10:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.435 10:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.435 BaseBdev1_malloc 00:08:56.435 10:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.435 10:37:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:56.435 10:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.435 10:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.435 true 00:08:56.435 10:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:56.435 10:37:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:56.436 10:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.436 10:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.436 [2024-11-15 10:37:17.478665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:56.436 [2024-11-15 10:37:17.478733] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:56.436 [2024-11-15 10:37:17.478761] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:56.436 [2024-11-15 10:37:17.478779] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:56.436 [2024-11-15 10:37:17.481651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:56.436 [2024-11-15 10:37:17.481705] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:56.436 BaseBdev1 00:08:56.436 10:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.436 10:37:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:56.436 10:37:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:56.436 10:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.436 10:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.436 BaseBdev2_malloc 00:08:56.436 10:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.436 10:37:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:56.436 10:37:17 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.436 10:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.436 true 00:08:56.436 10:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.436 10:37:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:56.436 10:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.436 10:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.436 [2024-11-15 10:37:17.540642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:56.436 [2024-11-15 10:37:17.540715] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:56.436 [2024-11-15 10:37:17.540742] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:56.436 [2024-11-15 10:37:17.540758] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:56.436 [2024-11-15 10:37:17.543662] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:56.436 [2024-11-15 10:37:17.543716] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:56.436 BaseBdev2 00:08:56.436 10:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.436 10:37:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:56.436 10:37:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:56.436 10:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.436 10:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.695 BaseBdev3_malloc 00:08:56.695 10:37:17 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.695 10:37:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:56.695 10:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.695 10:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.695 true 00:08:56.695 10:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.695 10:37:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:56.695 10:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.695 10:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.695 [2024-11-15 10:37:17.612861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:56.695 [2024-11-15 10:37:17.612928] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:56.695 [2024-11-15 10:37:17.612955] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:56.695 [2024-11-15 10:37:17.612972] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:56.695 [2024-11-15 10:37:17.615817] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:56.695 [2024-11-15 10:37:17.615868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:56.695 BaseBdev3 00:08:56.695 10:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.695 10:37:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:56.695 10:37:17 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.695 10:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.695 [2024-11-15 10:37:17.620950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:56.695 [2024-11-15 10:37:17.623387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:56.695 [2024-11-15 10:37:17.623573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:56.695 [2024-11-15 10:37:17.623840] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:56.695 [2024-11-15 10:37:17.623873] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:56.695 [2024-11-15 10:37:17.624188] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:56.695 [2024-11-15 10:37:17.624413] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:56.695 [2024-11-15 10:37:17.624446] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:56.695 [2024-11-15 10:37:17.624668] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:56.695 10:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.695 10:37:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:56.695 10:37:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:56.695 10:37:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:56.695 10:37:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:56.695 10:37:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.695 10:37:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.695 10:37:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.695 10:37:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.695 10:37:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.695 10:37:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.695 10:37:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.695 10:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.695 10:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.695 10:37:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:56.695 10:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.695 10:37:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.695 "name": "raid_bdev1", 00:08:56.695 "uuid": "29a3f80a-9bc8-48ac-a9ee-3cafa3a27732", 00:08:56.695 "strip_size_kb": 64, 00:08:56.695 "state": "online", 00:08:56.695 "raid_level": "raid0", 00:08:56.695 "superblock": true, 00:08:56.695 "num_base_bdevs": 3, 00:08:56.695 "num_base_bdevs_discovered": 3, 00:08:56.695 "num_base_bdevs_operational": 3, 00:08:56.695 "base_bdevs_list": [ 00:08:56.695 { 00:08:56.695 "name": "BaseBdev1", 00:08:56.695 "uuid": "f7e28689-6419-5a05-8d11-a0204a0fdf53", 00:08:56.695 "is_configured": true, 00:08:56.695 "data_offset": 2048, 00:08:56.695 "data_size": 63488 00:08:56.695 }, 00:08:56.695 { 00:08:56.695 "name": "BaseBdev2", 00:08:56.695 "uuid": "34bc9b51-7f0c-5431-91aa-2dcac7adb64e", 00:08:56.695 "is_configured": true, 00:08:56.695 "data_offset": 2048, 00:08:56.695 "data_size": 63488 
00:08:56.695 }, 00:08:56.695 { 00:08:56.695 "name": "BaseBdev3", 00:08:56.695 "uuid": "808b23cc-39ed-587b-9505-f505302823c0", 00:08:56.695 "is_configured": true, 00:08:56.695 "data_offset": 2048, 00:08:56.695 "data_size": 63488 00:08:56.695 } 00:08:56.695 ] 00:08:56.695 }' 00:08:56.695 10:37:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.695 10:37:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.261 10:37:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:57.261 10:37:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:57.261 [2024-11-15 10:37:18.250531] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:08:58.240 10:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:58.240 10:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.240 10:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.240 10:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.240 10:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:58.240 10:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:58.240 10:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:58.240 10:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:58.240 10:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:58.240 10:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:58.240 10:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:58.240 10:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.240 10:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.240 10:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.240 10:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.240 10:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.240 10:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.240 10:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.240 10:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.240 10:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.240 10:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:58.240 10:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.240 10:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.240 "name": "raid_bdev1", 00:08:58.240 "uuid": "29a3f80a-9bc8-48ac-a9ee-3cafa3a27732", 00:08:58.240 "strip_size_kb": 64, 00:08:58.240 "state": "online", 00:08:58.240 "raid_level": "raid0", 00:08:58.240 "superblock": true, 00:08:58.240 "num_base_bdevs": 3, 00:08:58.240 "num_base_bdevs_discovered": 3, 00:08:58.240 "num_base_bdevs_operational": 3, 00:08:58.240 "base_bdevs_list": [ 00:08:58.240 { 00:08:58.240 "name": "BaseBdev1", 00:08:58.240 "uuid": "f7e28689-6419-5a05-8d11-a0204a0fdf53", 00:08:58.240 "is_configured": true, 00:08:58.240 "data_offset": 2048, 00:08:58.240 "data_size": 63488 
00:08:58.240 }, 00:08:58.240 { 00:08:58.240 "name": "BaseBdev2", 00:08:58.240 "uuid": "34bc9b51-7f0c-5431-91aa-2dcac7adb64e", 00:08:58.240 "is_configured": true, 00:08:58.240 "data_offset": 2048, 00:08:58.240 "data_size": 63488 00:08:58.240 }, 00:08:58.240 { 00:08:58.240 "name": "BaseBdev3", 00:08:58.240 "uuid": "808b23cc-39ed-587b-9505-f505302823c0", 00:08:58.240 "is_configured": true, 00:08:58.240 "data_offset": 2048, 00:08:58.240 "data_size": 63488 00:08:58.240 } 00:08:58.240 ] 00:08:58.240 }' 00:08:58.240 10:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.240 10:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.499 10:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:58.499 10:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.499 10:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.499 [2024-11-15 10:37:19.657685] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:58.499 [2024-11-15 10:37:19.657731] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:58.757 [2024-11-15 10:37:19.661113] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:58.757 [2024-11-15 10:37:19.661178] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:58.757 [2024-11-15 10:37:19.661231] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:58.757 [2024-11-15 10:37:19.661246] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:58.757 { 00:08:58.757 "results": [ 00:08:58.757 { 00:08:58.757 "job": "raid_bdev1", 00:08:58.757 "core_mask": "0x1", 00:08:58.757 "workload": "randrw", 00:08:58.757 "percentage": 50, 
00:08:58.757 "status": "finished", 00:08:58.757 "queue_depth": 1, 00:08:58.757 "io_size": 131072, 00:08:58.757 "runtime": 1.404635, 00:08:58.757 "iops": 10362.834473012563, 00:08:58.757 "mibps": 1295.3543091265703, 00:08:58.757 "io_failed": 1, 00:08:58.757 "io_timeout": 0, 00:08:58.757 "avg_latency_us": 135.07106734029864, 00:08:58.757 "min_latency_us": 40.261818181818185, 00:08:58.757 "max_latency_us": 1846.9236363636364 00:08:58.757 } 00:08:58.757 ], 00:08:58.757 "core_count": 1 00:08:58.757 } 00:08:58.757 10:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.757 10:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65321 00:08:58.757 10:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65321 ']' 00:08:58.757 10:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65321 00:08:58.757 10:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:58.757 10:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:58.757 10:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65321 00:08:58.757 10:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:58.757 10:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:58.757 killing process with pid 65321 00:08:58.757 10:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65321' 00:08:58.757 10:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65321 00:08:58.757 [2024-11-15 10:37:19.694972] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:58.757 10:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65321 00:08:58.757 [2024-11-15 
10:37:19.906115] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:00.132 10:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.2H8vjm5Jlm 00:09:00.132 10:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:00.132 10:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:00.132 10:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:00.132 10:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:00.132 10:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:00.132 10:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:00.132 10:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:00.132 00:09:00.132 real 0m4.744s 00:09:00.132 user 0m5.830s 00:09:00.132 sys 0m0.607s 00:09:00.132 10:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:00.132 ************************************ 00:09:00.132 END TEST raid_read_error_test 00:09:00.132 10:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.132 ************************************ 00:09:00.132 10:37:21 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:09:00.132 10:37:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:00.132 10:37:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:00.132 10:37:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:00.132 ************************************ 00:09:00.132 START TEST raid_write_error_test 00:09:00.132 ************************************ 00:09:00.132 10:37:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:09:00.132 10:37:21 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:00.132 10:37:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:00.132 10:37:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:00.132 10:37:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:00.132 10:37:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:00.132 10:37:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:00.132 10:37:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:00.132 10:37:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:00.132 10:37:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:00.132 10:37:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:00.132 10:37:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:00.132 10:37:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:00.132 10:37:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:00.132 10:37:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:00.132 10:37:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:00.132 10:37:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:00.132 10:37:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:00.132 10:37:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:00.132 10:37:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:00.132 10:37:21 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:00.132 10:37:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:00.132 10:37:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:00.132 10:37:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:00.132 10:37:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:00.132 10:37:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:00.132 10:37:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.QhCdKSuWQi 00:09:00.132 10:37:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65467 00:09:00.132 10:37:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:00.132 10:37:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65467 00:09:00.132 10:37:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65467 ']' 00:09:00.132 10:37:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.132 10:37:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:00.132 10:37:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:00.132 10:37:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:00.132 10:37:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.132 [2024-11-15 10:37:21.217932] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:09:00.132 [2024-11-15 10:37:21.218359] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65467 ] 00:09:00.391 [2024-11-15 10:37:21.408683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.649 [2024-11-15 10:37:21.559655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.649 [2024-11-15 10:37:21.767954] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:00.649 [2024-11-15 10:37:21.768026] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:01.247 10:37:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:01.247 10:37:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:01.247 10:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:01.247 10:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:01.247 10:37:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.247 10:37:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.247 BaseBdev1_malloc 00:09:01.247 10:37:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.247 10:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:01.247 10:37:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.247 10:37:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.247 true 00:09:01.247 10:37:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.247 10:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:01.247 10:37:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.247 10:37:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.247 [2024-11-15 10:37:22.269236] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:01.247 [2024-11-15 10:37:22.269303] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:01.247 [2024-11-15 10:37:22.269333] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:01.247 [2024-11-15 10:37:22.269350] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:01.247 [2024-11-15 10:37:22.272115] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:01.247 [2024-11-15 10:37:22.272164] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:01.247 BaseBdev1 00:09:01.247 10:37:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.247 10:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:01.247 10:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:01.247 10:37:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.247 10:37:22 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:01.247 BaseBdev2_malloc 00:09:01.247 10:37:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.247 10:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:01.247 10:37:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.247 10:37:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.247 true 00:09:01.247 10:37:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.247 10:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:01.247 10:37:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.247 10:37:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.247 [2024-11-15 10:37:22.325571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:01.247 [2024-11-15 10:37:22.325637] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:01.247 [2024-11-15 10:37:22.325662] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:01.247 [2024-11-15 10:37:22.325679] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:01.247 [2024-11-15 10:37:22.328437] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:01.247 [2024-11-15 10:37:22.328498] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:01.247 BaseBdev2 00:09:01.247 10:37:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.247 10:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:01.247 10:37:22 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:01.247 10:37:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.247 10:37:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.247 BaseBdev3_malloc 00:09:01.247 10:37:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.247 10:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:01.247 10:37:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.247 10:37:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.247 true 00:09:01.247 10:37:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.247 10:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:01.247 10:37:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.247 10:37:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.247 [2024-11-15 10:37:22.395320] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:01.247 [2024-11-15 10:37:22.395400] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:01.247 [2024-11-15 10:37:22.395427] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:01.247 [2024-11-15 10:37:22.395444] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:01.247 [2024-11-15 10:37:22.398251] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:01.247 [2024-11-15 10:37:22.398299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:01.247 BaseBdev3 00:09:01.247 10:37:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.247 10:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:01.247 10:37:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.247 10:37:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.247 [2024-11-15 10:37:22.403425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:01.247 [2024-11-15 10:37:22.405933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:01.505 [2024-11-15 10:37:22.406056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:01.505 [2024-11-15 10:37:22.406326] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:01.505 [2024-11-15 10:37:22.406359] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:01.505 [2024-11-15 10:37:22.406714] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:01.505 [2024-11-15 10:37:22.406947] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:01.505 [2024-11-15 10:37:22.406980] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:01.505 [2024-11-15 10:37:22.407175] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:01.505 10:37:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.505 10:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:01.505 10:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:09:01.505 10:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:01.505 10:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:01.505 10:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.505 10:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.505 10:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.505 10:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.505 10:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.506 10:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.506 10:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.506 10:37:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.506 10:37:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.506 10:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:01.506 10:37:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.506 10:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.506 "name": "raid_bdev1", 00:09:01.506 "uuid": "ebec2878-d840-4c42-8c44-941a4f90baad", 00:09:01.506 "strip_size_kb": 64, 00:09:01.506 "state": "online", 00:09:01.506 "raid_level": "raid0", 00:09:01.506 "superblock": true, 00:09:01.506 "num_base_bdevs": 3, 00:09:01.506 "num_base_bdevs_discovered": 3, 00:09:01.506 "num_base_bdevs_operational": 3, 00:09:01.506 "base_bdevs_list": [ 00:09:01.506 { 00:09:01.506 "name": "BaseBdev1", 
00:09:01.506 "uuid": "d976eb54-d3dd-52ba-b03c-2fd63940c342", 00:09:01.506 "is_configured": true, 00:09:01.506 "data_offset": 2048, 00:09:01.506 "data_size": 63488 00:09:01.506 }, 00:09:01.506 { 00:09:01.506 "name": "BaseBdev2", 00:09:01.506 "uuid": "b4c81439-b4dd-5b40-9813-c456eea237a9", 00:09:01.506 "is_configured": true, 00:09:01.506 "data_offset": 2048, 00:09:01.506 "data_size": 63488 00:09:01.506 }, 00:09:01.506 { 00:09:01.506 "name": "BaseBdev3", 00:09:01.506 "uuid": "5b77defb-c1e6-5d46-b7a4-8a01028248f5", 00:09:01.506 "is_configured": true, 00:09:01.506 "data_offset": 2048, 00:09:01.506 "data_size": 63488 00:09:01.506 } 00:09:01.506 ] 00:09:01.506 }' 00:09:01.506 10:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.506 10:37:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.072 10:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:02.072 10:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:02.072 [2024-11-15 10:37:23.073024] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:03.006 10:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:03.006 10:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.006 10:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.006 10:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.006 10:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:03.006 10:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:03.006 10:37:23 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:03.006 10:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:03.006 10:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:03.006 10:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:03.006 10:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:03.006 10:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.006 10:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.006 10:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.006 10:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.006 10:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.006 10:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.006 10:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.006 10:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:03.006 10:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.006 10:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.006 10:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.007 10:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.007 "name": "raid_bdev1", 00:09:03.007 "uuid": "ebec2878-d840-4c42-8c44-941a4f90baad", 00:09:03.007 "strip_size_kb": 64, 00:09:03.007 "state": "online", 00:09:03.007 
"raid_level": "raid0", 00:09:03.007 "superblock": true, 00:09:03.007 "num_base_bdevs": 3, 00:09:03.007 "num_base_bdevs_discovered": 3, 00:09:03.007 "num_base_bdevs_operational": 3, 00:09:03.007 "base_bdevs_list": [ 00:09:03.007 { 00:09:03.007 "name": "BaseBdev1", 00:09:03.007 "uuid": "d976eb54-d3dd-52ba-b03c-2fd63940c342", 00:09:03.007 "is_configured": true, 00:09:03.007 "data_offset": 2048, 00:09:03.007 "data_size": 63488 00:09:03.007 }, 00:09:03.007 { 00:09:03.007 "name": "BaseBdev2", 00:09:03.007 "uuid": "b4c81439-b4dd-5b40-9813-c456eea237a9", 00:09:03.007 "is_configured": true, 00:09:03.007 "data_offset": 2048, 00:09:03.007 "data_size": 63488 00:09:03.007 }, 00:09:03.007 { 00:09:03.007 "name": "BaseBdev3", 00:09:03.007 "uuid": "5b77defb-c1e6-5d46-b7a4-8a01028248f5", 00:09:03.007 "is_configured": true, 00:09:03.007 "data_offset": 2048, 00:09:03.007 "data_size": 63488 00:09:03.007 } 00:09:03.007 ] 00:09:03.007 }' 00:09:03.007 10:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.007 10:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.573 10:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:03.573 10:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.573 10:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.573 [2024-11-15 10:37:24.451916] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:03.573 [2024-11-15 10:37:24.451958] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:03.573 [2024-11-15 10:37:24.455404] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:03.573 [2024-11-15 10:37:24.455470] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:03.573 [2024-11-15 10:37:24.455540] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:03.573 [2024-11-15 10:37:24.455557] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:03.573 { 00:09:03.573 "results": [ 00:09:03.573 { 00:09:03.573 "job": "raid_bdev1", 00:09:03.573 "core_mask": "0x1", 00:09:03.573 "workload": "randrw", 00:09:03.573 "percentage": 50, 00:09:03.573 "status": "finished", 00:09:03.573 "queue_depth": 1, 00:09:03.573 "io_size": 131072, 00:09:03.573 "runtime": 1.376333, 00:09:03.573 "iops": 10503.272100574497, 00:09:03.573 "mibps": 1312.9090125718121, 00:09:03.573 "io_failed": 1, 00:09:03.573 "io_timeout": 0, 00:09:03.573 "avg_latency_us": 133.2904364667635, 00:09:03.573 "min_latency_us": 41.42545454545454, 00:09:03.573 "max_latency_us": 1832.0290909090909 00:09:03.573 } 00:09:03.573 ], 00:09:03.573 "core_count": 1 00:09:03.573 } 00:09:03.573 10:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.573 10:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65467 00:09:03.573 10:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65467 ']' 00:09:03.573 10:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65467 00:09:03.573 10:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:03.573 10:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:03.573 10:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65467 00:09:03.573 killing process with pid 65467 00:09:03.573 10:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:03.573 10:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:03.574 10:37:24 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65467' 00:09:03.574 10:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65467 00:09:03.574 10:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65467 00:09:03.574 [2024-11-15 10:37:24.492850] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:03.574 [2024-11-15 10:37:24.705173] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:05.031 10:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.QhCdKSuWQi 00:09:05.031 10:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:05.031 10:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:05.031 10:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:05.031 10:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:05.031 10:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:05.031 10:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:05.031 10:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:05.031 00:09:05.031 real 0m4.749s 00:09:05.031 user 0m5.873s 00:09:05.031 sys 0m0.623s 00:09:05.031 10:37:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.031 10:37:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.031 ************************************ 00:09:05.031 END TEST raid_write_error_test 00:09:05.031 ************************************ 00:09:05.031 10:37:25 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:05.031 10:37:25 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:09:05.031 10:37:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:05.031 10:37:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:05.031 10:37:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:05.031 ************************************ 00:09:05.031 START TEST raid_state_function_test 00:09:05.031 ************************************ 00:09:05.031 10:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:09:05.031 10:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:05.031 10:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:05.031 10:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:05.031 10:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:05.031 10:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:05.031 10:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:05.031 10:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:05.031 10:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:05.031 10:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:05.031 10:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:05.031 10:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:05.031 10:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:05.031 10:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:05.031 10:37:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:05.031 10:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:05.031 10:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:05.031 10:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:05.031 10:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:05.031 10:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:05.031 10:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:05.031 10:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:05.031 10:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:05.031 10:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:05.031 10:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:05.031 10:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:05.031 10:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:05.031 10:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65616 00:09:05.031 Process raid pid: 65616 00:09:05.031 10:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65616' 00:09:05.031 10:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:05.031 10:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65616 00:09:05.031 10:37:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65616 ']' 00:09:05.031 10:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.031 10:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:05.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.031 10:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.031 10:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:05.031 10:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.031 [2024-11-15 10:37:25.982924] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:09:05.031 [2024-11-15 10:37:25.983085] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:05.031 [2024-11-15 10:37:26.162938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.289 [2024-11-15 10:37:26.294888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.546 [2024-11-15 10:37:26.504341] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:05.546 [2024-11-15 10:37:26.504399] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:06.113 10:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:06.113 10:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:06.113 10:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:06.113 10:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:06.113 10:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.113 [2024-11-15 10:37:27.005801] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:06.113 [2024-11-15 10:37:27.005867] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:06.113 [2024-11-15 10:37:27.005885] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:06.113 [2024-11-15 10:37:27.005901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:06.113 [2024-11-15 10:37:27.005911] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:06.113 [2024-11-15 10:37:27.005926] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:06.113 10:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:06.113 10:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:06.113 10:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:06.113 10:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:06.113 10:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:06.113 10:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:06.113 10:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:06.113 10:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:06.113 10:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:06.113 10:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:06.113 10:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:06.113 10:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:06.113 10:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:06.113 10:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:06.113 10:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.113 10:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:06.113 10:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:06.113 "name": "Existed_Raid",
00:09:06.113 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:06.113 "strip_size_kb": 64,
00:09:06.113 "state": "configuring",
00:09:06.113 "raid_level": "concat",
00:09:06.113 "superblock": false,
00:09:06.113 "num_base_bdevs": 3,
00:09:06.113 "num_base_bdevs_discovered": 0,
00:09:06.113 "num_base_bdevs_operational": 3,
00:09:06.113 "base_bdevs_list": [
00:09:06.113 {
00:09:06.113 "name": "BaseBdev1",
00:09:06.113 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:06.113 "is_configured": false,
00:09:06.113 "data_offset": 0,
00:09:06.113 "data_size": 0
00:09:06.113 },
00:09:06.113 {
00:09:06.113 "name": "BaseBdev2",
00:09:06.113 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:06.113 "is_configured": false,
00:09:06.113 "data_offset": 0,
00:09:06.113 "data_size": 0
00:09:06.113 },
00:09:06.113 {
00:09:06.113 "name": "BaseBdev3",
00:09:06.113 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:06.113 "is_configured": false,
00:09:06.113 "data_offset": 0,
00:09:06.113 "data_size": 0
00:09:06.113 }
00:09:06.113 ]
00:09:06.113 }'
00:09:06.113 10:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:06.113 10:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.371 10:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:06.371 10:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:06.371 10:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.371 [2024-11-15 10:37:27.509875] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:06.371 [2024-11-15 10:37:27.509920] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:09:06.371 10:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:06.371 10:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:06.371 10:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:06.371 10:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.371 [2024-11-15 10:37:27.521861] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:06.371 [2024-11-15 10:37:27.522048] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:06.371 [2024-11-15 10:37:27.522176] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:06.371 [2024-11-15 10:37:27.522241] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:06.371 [2024-11-15 10:37:27.522356] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:06.371 [2024-11-15 10:37:27.522416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:06.371 10:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:06.371 10:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:09:06.371 10:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:06.371 10:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.629 [2024-11-15 10:37:27.566943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:06.629 BaseBdev1
00:09:06.629 10:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:06.629 10:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:09:06.629 10:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:09:06.629 10:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:06.629 10:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:09:06.629 10:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:06.629 10:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:06.629 10:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:06.629 10:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:06.629 10:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.629 10:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:06.629 10:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:06.629 10:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:06.629 10:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.629 [
00:09:06.629 {
00:09:06.629 "name": "BaseBdev1",
00:09:06.629 "aliases": [
00:09:06.629 "d198de25-114c-461e-99db-14246ca609d6"
00:09:06.629 ],
00:09:06.629 "product_name": "Malloc disk",
00:09:06.629 "block_size": 512,
00:09:06.629 "num_blocks": 65536,
00:09:06.629 "uuid": "d198de25-114c-461e-99db-14246ca609d6",
00:09:06.629 "assigned_rate_limits": {
00:09:06.629 "rw_ios_per_sec": 0,
00:09:06.629 "rw_mbytes_per_sec": 0,
00:09:06.629 "r_mbytes_per_sec": 0,
00:09:06.629 "w_mbytes_per_sec": 0
00:09:06.629 },
00:09:06.629 "claimed": true,
00:09:06.629 "claim_type": "exclusive_write",
00:09:06.629 "zoned": false,
00:09:06.629 "supported_io_types": {
00:09:06.629 "read": true,
00:09:06.629 "write": true,
00:09:06.629 "unmap": true,
00:09:06.629 "flush": true,
00:09:06.629 "reset": true,
00:09:06.629 "nvme_admin": false,
00:09:06.629 "nvme_io": false,
00:09:06.629 "nvme_io_md": false,
00:09:06.629 "write_zeroes": true,
00:09:06.629 "zcopy": true,
00:09:06.629 "get_zone_info": false,
00:09:06.629 "zone_management": false,
00:09:06.629 "zone_append": false,
00:09:06.629 "compare": false,
00:09:06.629 "compare_and_write": false,
00:09:06.629 "abort": true,
00:09:06.629 "seek_hole": false,
00:09:06.629 "seek_data": false,
00:09:06.629 "copy": true,
00:09:06.629 "nvme_iov_md": false
00:09:06.629 },
00:09:06.629 "memory_domains": [
00:09:06.629 {
00:09:06.629 "dma_device_id": "system",
00:09:06.629 "dma_device_type": 1
00:09:06.629 },
00:09:06.629 {
00:09:06.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:06.629 "dma_device_type": 2
00:09:06.629 }
00:09:06.629 ],
00:09:06.629 "driver_specific": {}
00:09:06.629 }
00:09:06.629 ]
00:09:06.629 10:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:06.629 10:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:09:06.629 10:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:06.629 10:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:06.629 10:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:06.629 10:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:06.629 10:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:06.629 10:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:06.629 10:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:06.629 10:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:06.629 10:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:06.629 10:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:06.629 10:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:06.629 10:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:06.629 10:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:06.629 10:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.629 10:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:06.629 10:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:06.629 "name": "Existed_Raid",
00:09:06.629 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:06.629 "strip_size_kb": 64,
00:09:06.629 "state": "configuring",
00:09:06.629 "raid_level": "concat",
00:09:06.629 "superblock": false,
00:09:06.629 "num_base_bdevs": 3,
00:09:06.629 "num_base_bdevs_discovered": 1,
00:09:06.630 "num_base_bdevs_operational": 3,
00:09:06.630 "base_bdevs_list": [
00:09:06.630 {
00:09:06.630 "name": "BaseBdev1",
00:09:06.630 "uuid": "d198de25-114c-461e-99db-14246ca609d6",
00:09:06.630 "is_configured": true,
00:09:06.630 "data_offset": 0,
00:09:06.630 "data_size": 65536
00:09:06.630 },
00:09:06.630 {
00:09:06.630 "name": "BaseBdev2",
00:09:06.630 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:06.630 "is_configured": false,
00:09:06.630 "data_offset": 0,
00:09:06.630 "data_size": 0
00:09:06.630 },
00:09:06.630 {
00:09:06.630 "name": "BaseBdev3",
00:09:06.630 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:06.630 "is_configured": false,
00:09:06.630 "data_offset": 0,
00:09:06.630 "data_size": 0
00:09:06.630 }
00:09:06.630 ]
00:09:06.630 }'
00:09:06.630 10:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:06.630 10:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.195 10:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:07.195 10:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:07.195 10:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.195 [2024-11-15 10:37:28.087154] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:07.195 [2024-11-15 10:37:28.087223] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:09:07.195 10:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:07.195 10:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:07.195 10:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:07.195 10:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.195 [2024-11-15 10:37:28.095166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:07.195 [2024-11-15 10:37:28.097664] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:07.195 [2024-11-15 10:37:28.097722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:07.195 [2024-11-15 10:37:28.097745] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:07.195 [2024-11-15 10:37:28.097760] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:07.195 10:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:07.195 10:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:09:07.195 10:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:07.195 10:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:07.195 10:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:07.195 10:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:07.195 10:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:07.195 10:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:07.195 10:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:07.195 10:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:07.195 10:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:07.195 10:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:07.195 10:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:07.195 10:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:07.195 10:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:07.195 10:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.195 10:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:07.195 10:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:07.195 10:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:07.195 "name": "Existed_Raid",
00:09:07.195 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:07.195 "strip_size_kb": 64,
00:09:07.195 "state": "configuring",
00:09:07.195 "raid_level": "concat",
00:09:07.195 "superblock": false,
00:09:07.195 "num_base_bdevs": 3,
00:09:07.195 "num_base_bdevs_discovered": 1,
00:09:07.195 "num_base_bdevs_operational": 3,
00:09:07.195 "base_bdevs_list": [
00:09:07.195 {
00:09:07.195 "name": "BaseBdev1",
00:09:07.195 "uuid": "d198de25-114c-461e-99db-14246ca609d6",
00:09:07.195 "is_configured": true,
00:09:07.195 "data_offset": 0,
00:09:07.195 "data_size": 65536
00:09:07.195 },
00:09:07.195 {
00:09:07.195 "name": "BaseBdev2",
00:09:07.195 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:07.195 "is_configured": false,
00:09:07.195 "data_offset": 0,
00:09:07.195 "data_size": 0
00:09:07.195 },
00:09:07.195 {
00:09:07.195 "name": "BaseBdev3",
00:09:07.195 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:07.195 "is_configured": false,
00:09:07.195 "data_offset": 0,
00:09:07.195 "data_size": 0
00:09:07.195 }
00:09:07.195 ]
00:09:07.195 }'
00:09:07.195 10:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:07.195 10:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.452 10:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:09:07.452 10:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:07.452 10:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.711 [2024-11-15 10:37:28.621818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:07.711 BaseBdev2
00:09:07.711 10:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:07.711 10:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:09:07.711 10:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:09:07.711 10:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:07.711 10:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:09:07.711 10:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:07.711 10:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:07.711 10:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:07.711 10:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:07.711 10:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.711 10:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:07.711 10:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:07.711 10:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:07.711 10:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.711 [
00:09:07.711 {
00:09:07.711 "name": "BaseBdev2",
00:09:07.711 "aliases": [
00:09:07.711 "61dfada3-0dc5-4bce-b473-1f9c6b86712f"
00:09:07.711 ],
00:09:07.711 "product_name": "Malloc disk",
00:09:07.711 "block_size": 512,
00:09:07.711 "num_blocks": 65536,
00:09:07.711 "uuid": "61dfada3-0dc5-4bce-b473-1f9c6b86712f",
00:09:07.711 "assigned_rate_limits": {
00:09:07.711 "rw_ios_per_sec": 0,
00:09:07.711 "rw_mbytes_per_sec": 0,
00:09:07.711 "r_mbytes_per_sec": 0,
00:09:07.711 "w_mbytes_per_sec": 0
00:09:07.711 },
00:09:07.711 "claimed": true,
00:09:07.711 "claim_type": "exclusive_write",
00:09:07.711 "zoned": false,
00:09:07.711 "supported_io_types": {
00:09:07.711 "read": true,
00:09:07.711 "write": true,
00:09:07.711 "unmap": true,
00:09:07.711 "flush": true,
00:09:07.711 "reset": true,
00:09:07.711 "nvme_admin": false,
00:09:07.711 "nvme_io": false,
00:09:07.711 "nvme_io_md": false,
00:09:07.711 "write_zeroes": true,
00:09:07.711 "zcopy": true,
00:09:07.711 "get_zone_info": false,
00:09:07.711 "zone_management": false,
00:09:07.711 "zone_append": false,
00:09:07.711 "compare": false,
00:09:07.711 "compare_and_write": false,
00:09:07.711 "abort": true,
00:09:07.711 "seek_hole": false,
00:09:07.711 "seek_data": false,
00:09:07.711 "copy": true,
00:09:07.711 "nvme_iov_md": false
00:09:07.711 },
00:09:07.711 "memory_domains": [
00:09:07.711 {
00:09:07.711 "dma_device_id": "system",
00:09:07.711 "dma_device_type": 1
00:09:07.711 },
00:09:07.711 {
00:09:07.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:07.711 "dma_device_type": 2
00:09:07.711 }
00:09:07.711 ],
00:09:07.711 "driver_specific": {}
00:09:07.711 }
00:09:07.711 ]
00:09:07.711 10:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:07.711 10:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:09:07.711 10:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:09:07.711 10:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:07.711 10:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:07.711 10:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:07.711 10:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:07.711 10:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:07.711 10:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:07.711 10:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:07.711 10:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:07.711 10:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:07.711 10:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:07.711 10:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:07.711 10:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:07.711 10:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:07.711 10:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:07.711 10:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.711 10:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:07.711 10:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:07.711 "name": "Existed_Raid",
00:09:07.711 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:07.711 "strip_size_kb": 64,
00:09:07.711 "state": "configuring",
00:09:07.711 "raid_level": "concat",
00:09:07.711 "superblock": false,
00:09:07.711 "num_base_bdevs": 3,
00:09:07.711 "num_base_bdevs_discovered": 2,
00:09:07.711 "num_base_bdevs_operational": 3,
00:09:07.711 "base_bdevs_list": [
00:09:07.711 {
00:09:07.711 "name": "BaseBdev1",
00:09:07.711 "uuid": "d198de25-114c-461e-99db-14246ca609d6",
00:09:07.711 "is_configured": true,
00:09:07.711 "data_offset": 0,
00:09:07.711 "data_size": 65536
00:09:07.711 },
00:09:07.711 {
00:09:07.711 "name": "BaseBdev2",
00:09:07.711 "uuid": "61dfada3-0dc5-4bce-b473-1f9c6b86712f",
00:09:07.711 "is_configured": true,
00:09:07.711 "data_offset": 0,
00:09:07.711 "data_size": 65536
00:09:07.711 },
00:09:07.711 {
00:09:07.711 "name": "BaseBdev3",
00:09:07.711 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:07.711 "is_configured": false,
00:09:07.711 "data_offset": 0,
00:09:07.711 "data_size": 0
00:09:07.711 }
00:09:07.711 ]
00:09:07.711 }'
00:09:07.711 10:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:07.711 10:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.277 10:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:09:08.277 10:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:08.277 10:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.277 [2024-11-15 10:37:29.221428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:08.277 [2024-11-15 10:37:29.221488] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:09:08.277 [2024-11-15 10:37:29.221538] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:09:08.277 [2024-11-15 10:37:29.221912] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:09:08.277 [2024-11-15 10:37:29.222152] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:09:08.277 [2024-11-15 10:37:29.222169] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:09:08.277 [2024-11-15 10:37:29.222528] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:08.277 BaseBdev3
00:09:08.277 10:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:08.277 10:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:09:08.277 10:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:09:08.277 10:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:08.277 10:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:09:08.277 10:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:08.277 10:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:08.277 10:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:08.277 10:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:08.277 10:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.277 10:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:08.277 10:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:09:08.277 10:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:08.277 10:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.278 [
00:09:08.278 {
00:09:08.278 "name": "BaseBdev3",
00:09:08.278 "aliases": [
00:09:08.278 "2622f52d-06e2-42cf-a05c-4f9f47b61c0c"
00:09:08.278 ],
00:09:08.278 "product_name": "Malloc disk",
00:09:08.278 "block_size": 512,
00:09:08.278 "num_blocks": 65536,
00:09:08.278 "uuid": "2622f52d-06e2-42cf-a05c-4f9f47b61c0c",
00:09:08.278 "assigned_rate_limits": {
00:09:08.278 "rw_ios_per_sec": 0,
00:09:08.278 "rw_mbytes_per_sec": 0,
00:09:08.278 "r_mbytes_per_sec": 0,
00:09:08.278 "w_mbytes_per_sec": 0
00:09:08.278 },
00:09:08.278 "claimed": true,
00:09:08.278 "claim_type": "exclusive_write",
00:09:08.278 "zoned": false,
00:09:08.278 "supported_io_types": {
00:09:08.278 "read": true,
00:09:08.278 "write": true,
00:09:08.278 "unmap": true,
00:09:08.278 "flush": true,
00:09:08.278 "reset": true,
00:09:08.278 "nvme_admin": false,
00:09:08.278 "nvme_io": false,
00:09:08.278 "nvme_io_md": false,
00:09:08.278 "write_zeroes": true,
00:09:08.278 "zcopy": true,
00:09:08.278 "get_zone_info": false,
00:09:08.278 "zone_management": false,
00:09:08.278 "zone_append": false,
00:09:08.278 "compare": false,
00:09:08.278 "compare_and_write": false,
00:09:08.278 "abort": true,
00:09:08.278 "seek_hole": false,
00:09:08.278 "seek_data": false,
00:09:08.278 "copy": true,
00:09:08.278 "nvme_iov_md": false
00:09:08.278 },
00:09:08.278 "memory_domains": [
00:09:08.278 {
00:09:08.278 "dma_device_id": "system",
00:09:08.278 "dma_device_type": 1
00:09:08.278 },
00:09:08.278 {
00:09:08.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:08.278 "dma_device_type": 2
00:09:08.278 }
00:09:08.278 ],
00:09:08.278 "driver_specific": {}
00:09:08.278 }
00:09:08.278 ]
00:09:08.278 10:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:08.278 10:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:09:08.278 10:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:09:08.278 10:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:08.278 10:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3
00:09:08.278 10:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:08.278 10:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:08.278 10:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:08.278 10:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:08.278 10:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:08.278 10:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:08.278 10:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:08.278 10:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:08.278 10:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:08.278 10:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:08.278 10:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:08.278 10:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:08.278 10:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.278 10:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:08.278 10:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:08.278 "name": "Existed_Raid",
00:09:08.278 "uuid": "0261f756-eb18-4905-96da-b03e9d5f6da7",
00:09:08.278 "strip_size_kb": 64,
00:09:08.278 "state": "online",
00:09:08.278 "raid_level": "concat",
00:09:08.278 "superblock": false,
00:09:08.278 "num_base_bdevs": 3,
00:09:08.278 "num_base_bdevs_discovered": 3,
00:09:08.278 "num_base_bdevs_operational": 3,
00:09:08.278 "base_bdevs_list": [
00:09:08.278 {
00:09:08.278 "name": "BaseBdev1",
00:09:08.278 "uuid": "d198de25-114c-461e-99db-14246ca609d6",
00:09:08.278 "is_configured": true,
00:09:08.278 "data_offset": 0,
00:09:08.278 "data_size": 65536
00:09:08.278 },
00:09:08.278 {
00:09:08.278 "name": "BaseBdev2",
00:09:08.278 "uuid": "61dfada3-0dc5-4bce-b473-1f9c6b86712f",
00:09:08.278 "is_configured": true,
00:09:08.278 "data_offset": 0,
00:09:08.278 "data_size": 65536
00:09:08.278 },
00:09:08.278 {
00:09:08.278 "name": "BaseBdev3",
00:09:08.278 "uuid": "2622f52d-06e2-42cf-a05c-4f9f47b61c0c",
00:09:08.278 "is_configured": true,
00:09:08.278 "data_offset": 0,
00:09:08.278 "data_size": 65536
00:09:08.278 }
00:09:08.278 ]
00:09:08.278 }'
00:09:08.278 10:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:08.278 10:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.842 10:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:09:08.842 10:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:09:08.842 10:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:08.842 10:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:08.842 10:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:09:08.842 10:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:08.842 10:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:09:08.842 10:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:08.842 10:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:08.842 10:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.842 [2024-11-15 10:37:29.774006] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:08.842 10:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:08.842 10:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:08.842 "name": "Existed_Raid",
00:09:08.842 "aliases": [
00:09:08.842 "0261f756-eb18-4905-96da-b03e9d5f6da7"
00:09:08.842 ],
00:09:08.842 "product_name": "Raid Volume",
00:09:08.842 "block_size": 512,
00:09:08.842 "num_blocks": 196608,
00:09:08.842 "uuid": "0261f756-eb18-4905-96da-b03e9d5f6da7",
00:09:08.842 "assigned_rate_limits": {
00:09:08.842 "rw_ios_per_sec": 0,
00:09:08.842 "rw_mbytes_per_sec": 0,
00:09:08.842 "r_mbytes_per_sec": 0,
00:09:08.842 "w_mbytes_per_sec": 0
00:09:08.842 },
00:09:08.842 "claimed": false,
00:09:08.842 "zoned": false,
00:09:08.842 "supported_io_types": {
00:09:08.842 "read": true,
00:09:08.842 "write": true,
00:09:08.842 "unmap": true,
00:09:08.842 "flush": true,
00:09:08.842 "reset": true,
00:09:08.842 "nvme_admin": false,
00:09:08.842 "nvme_io": false,
00:09:08.842 "nvme_io_md": false,
00:09:08.842 "write_zeroes": true,
00:09:08.842 "zcopy": false,
00:09:08.842 "get_zone_info": false,
00:09:08.842 "zone_management": false,
00:09:08.842 "zone_append": false,
00:09:08.842 "compare": false,
00:09:08.842 "compare_and_write": false,
00:09:08.842 "abort": false,
00:09:08.842 "seek_hole": false,
00:09:08.842 "seek_data": false,
00:09:08.842 "copy": false,
00:09:08.842 "nvme_iov_md": false
00:09:08.842 },
00:09:08.842 "memory_domains": [
00:09:08.842 {
00:09:08.842 "dma_device_id": "system",
00:09:08.842 "dma_device_type": 1
00:09:08.842 },
00:09:08.842 {
00:09:08.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:08.842 "dma_device_type": 2
00:09:08.842 },
00:09:08.842 {
00:09:08.842 "dma_device_id": "system",
00:09:08.842 "dma_device_type": 1
00:09:08.842 },
00:09:08.842 {
00:09:08.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:08.842 "dma_device_type": 2
00:09:08.842 },
00:09:08.842 {
00:09:08.842 "dma_device_id": "system",
00:09:08.842 "dma_device_type": 1
00:09:08.842 },
00:09:08.842 {
00:09:08.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:08.842 "dma_device_type": 2
00:09:08.842 }
00:09:08.842 ],
00:09:08.842 "driver_specific": {
00:09:08.842 "raid": {
00:09:08.842 "uuid": "0261f756-eb18-4905-96da-b03e9d5f6da7",
00:09:08.842 "strip_size_kb": 64,
00:09:08.842 "state": "online",
00:09:08.842 "raid_level": "concat",
00:09:08.842 "superblock": false,
00:09:08.842 "num_base_bdevs": 3,
00:09:08.842 "num_base_bdevs_discovered": 3,
00:09:08.842 "num_base_bdevs_operational": 3,
00:09:08.842 "base_bdevs_list": [
00:09:08.842 {
00:09:08.842 "name": "BaseBdev1",
00:09:08.842 "uuid": "d198de25-114c-461e-99db-14246ca609d6",
00:09:08.842 "is_configured": true,
00:09:08.842 "data_offset": 0,
00:09:08.842 "data_size": 65536
00:09:08.842 },
00:09:08.842 {
00:09:08.842 "name": "BaseBdev2",
00:09:08.843 "uuid": "61dfada3-0dc5-4bce-b473-1f9c6b86712f",
00:09:08.843 "is_configured": true,
00:09:08.843 "data_offset": 0,
00:09:08.843 "data_size": 65536
00:09:08.843 },
00:09:08.843 {
00:09:08.843 "name": "BaseBdev3",
00:09:08.843 "uuid": "2622f52d-06e2-42cf-a05c-4f9f47b61c0c",
00:09:08.843 "is_configured": true,
00:09:08.843 "data_offset": 0,
00:09:08.843 "data_size": 65536
00:09:08.843 }
00:09:08.843 ]
00:09:08.843 }
00:09:08.843 }
00:09:08.843 }'
00:09:08.843 10:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:08.843 10:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:09:08.843 BaseBdev2
00:09:08.843 BaseBdev3'
00:09:08.843 10:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:08.843 10:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:09:08.843 10:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:08.843 10:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:09:08.843 10:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:08.843 10:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.843 10:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:08.843 10:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0
== 0 ]] 00:09:08.843 10:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:08.843 10:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:08.843 10:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.843 10:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.843 10:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:08.843 10:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.843 10:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.100 10:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.100 10:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.100 10:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.100 10:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.100 10:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.100 10:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:09.100 10:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.100 10:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.100 10:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.100 10:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:09:09.100 10:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.100 10:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:09.100 10:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.100 10:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.100 [2024-11-15 10:37:30.109803] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:09.100 [2024-11-15 10:37:30.109840] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:09.100 [2024-11-15 10:37:30.109913] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:09.100 10:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.100 10:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:09.100 10:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:09.100 10:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:09.100 10:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:09.100 10:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:09.100 10:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:09.100 10:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.100 10:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:09.100 10:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:09.100 10:37:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.100 10:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:09.100 10:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.101 10:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.101 10:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.101 10:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.101 10:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.101 10:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.101 10:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.101 10:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.101 10:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.101 10:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.101 "name": "Existed_Raid", 00:09:09.101 "uuid": "0261f756-eb18-4905-96da-b03e9d5f6da7", 00:09:09.101 "strip_size_kb": 64, 00:09:09.101 "state": "offline", 00:09:09.101 "raid_level": "concat", 00:09:09.101 "superblock": false, 00:09:09.101 "num_base_bdevs": 3, 00:09:09.101 "num_base_bdevs_discovered": 2, 00:09:09.101 "num_base_bdevs_operational": 2, 00:09:09.101 "base_bdevs_list": [ 00:09:09.101 { 00:09:09.101 "name": null, 00:09:09.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.101 "is_configured": false, 00:09:09.101 "data_offset": 0, 00:09:09.101 "data_size": 65536 00:09:09.101 }, 00:09:09.101 { 00:09:09.101 "name": "BaseBdev2", 00:09:09.101 "uuid": 
"61dfada3-0dc5-4bce-b473-1f9c6b86712f", 00:09:09.101 "is_configured": true, 00:09:09.101 "data_offset": 0, 00:09:09.101 "data_size": 65536 00:09:09.101 }, 00:09:09.101 { 00:09:09.101 "name": "BaseBdev3", 00:09:09.101 "uuid": "2622f52d-06e2-42cf-a05c-4f9f47b61c0c", 00:09:09.101 "is_configured": true, 00:09:09.101 "data_offset": 0, 00:09:09.101 "data_size": 65536 00:09:09.101 } 00:09:09.101 ] 00:09:09.101 }' 00:09:09.101 10:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.101 10:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.665 10:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:09.665 10:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:09.665 10:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:09.665 10:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.665 10:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.665 10:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.665 10:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.665 10:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:09.665 10:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:09.665 10:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:09.665 10:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.665 10:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.665 [2024-11-15 10:37:30.776511] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:09.923 10:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.923 10:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:09.923 10:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:09.923 10:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:09.923 10:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.923 10:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.923 10:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.923 10:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.923 10:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:09.923 10:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:09.923 10:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:09.923 10:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.923 10:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.923 [2024-11-15 10:37:30.921738] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:09.923 [2024-11-15 10:37:30.921804] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:09.923 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.923 10:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:09.923 10:37:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:09.923 10:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.923 10:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:09.923 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.923 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.923 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.923 10:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:09.923 10:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:09.923 10:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:09.923 10:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:09.923 10:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:09.923 10:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:09.923 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.923 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.182 BaseBdev2 00:09:10.182 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.182 10:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:10.182 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:10.182 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:10.182 
10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:10.182 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:10.182 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:10.182 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:10.182 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.182 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.182 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.182 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:10.182 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.182 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.182 [ 00:09:10.182 { 00:09:10.182 "name": "BaseBdev2", 00:09:10.182 "aliases": [ 00:09:10.182 "304bc0b0-abc0-42ee-b22f-82567e2a1c64" 00:09:10.182 ], 00:09:10.182 "product_name": "Malloc disk", 00:09:10.182 "block_size": 512, 00:09:10.182 "num_blocks": 65536, 00:09:10.182 "uuid": "304bc0b0-abc0-42ee-b22f-82567e2a1c64", 00:09:10.182 "assigned_rate_limits": { 00:09:10.182 "rw_ios_per_sec": 0, 00:09:10.182 "rw_mbytes_per_sec": 0, 00:09:10.182 "r_mbytes_per_sec": 0, 00:09:10.182 "w_mbytes_per_sec": 0 00:09:10.182 }, 00:09:10.182 "claimed": false, 00:09:10.182 "zoned": false, 00:09:10.182 "supported_io_types": { 00:09:10.182 "read": true, 00:09:10.182 "write": true, 00:09:10.182 "unmap": true, 00:09:10.182 "flush": true, 00:09:10.182 "reset": true, 00:09:10.182 "nvme_admin": false, 00:09:10.182 "nvme_io": false, 00:09:10.182 "nvme_io_md": false, 00:09:10.182 "write_zeroes": true, 
00:09:10.182 "zcopy": true, 00:09:10.182 "get_zone_info": false, 00:09:10.182 "zone_management": false, 00:09:10.182 "zone_append": false, 00:09:10.182 "compare": false, 00:09:10.182 "compare_and_write": false, 00:09:10.182 "abort": true, 00:09:10.182 "seek_hole": false, 00:09:10.182 "seek_data": false, 00:09:10.182 "copy": true, 00:09:10.182 "nvme_iov_md": false 00:09:10.182 }, 00:09:10.182 "memory_domains": [ 00:09:10.182 { 00:09:10.182 "dma_device_id": "system", 00:09:10.182 "dma_device_type": 1 00:09:10.182 }, 00:09:10.182 { 00:09:10.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.182 "dma_device_type": 2 00:09:10.182 } 00:09:10.182 ], 00:09:10.182 "driver_specific": {} 00:09:10.182 } 00:09:10.182 ] 00:09:10.182 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.182 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:10.182 10:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:10.182 10:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:10.182 10:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:10.183 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.183 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.183 BaseBdev3 00:09:10.183 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.183 10:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:10.183 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:10.183 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:10.183 10:37:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:10.183 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:10.183 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:10.183 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:10.183 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.183 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.183 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.183 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:10.183 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.183 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.183 [ 00:09:10.183 { 00:09:10.183 "name": "BaseBdev3", 00:09:10.183 "aliases": [ 00:09:10.183 "00cc9fa2-0334-4bbb-92ff-8c8377a07bbf" 00:09:10.183 ], 00:09:10.183 "product_name": "Malloc disk", 00:09:10.183 "block_size": 512, 00:09:10.183 "num_blocks": 65536, 00:09:10.183 "uuid": "00cc9fa2-0334-4bbb-92ff-8c8377a07bbf", 00:09:10.183 "assigned_rate_limits": { 00:09:10.183 "rw_ios_per_sec": 0, 00:09:10.183 "rw_mbytes_per_sec": 0, 00:09:10.183 "r_mbytes_per_sec": 0, 00:09:10.183 "w_mbytes_per_sec": 0 00:09:10.183 }, 00:09:10.183 "claimed": false, 00:09:10.183 "zoned": false, 00:09:10.183 "supported_io_types": { 00:09:10.183 "read": true, 00:09:10.183 "write": true, 00:09:10.183 "unmap": true, 00:09:10.183 "flush": true, 00:09:10.183 "reset": true, 00:09:10.183 "nvme_admin": false, 00:09:10.183 "nvme_io": false, 00:09:10.183 "nvme_io_md": false, 00:09:10.183 "write_zeroes": true, 
00:09:10.183 "zcopy": true, 00:09:10.183 "get_zone_info": false, 00:09:10.183 "zone_management": false, 00:09:10.183 "zone_append": false, 00:09:10.183 "compare": false, 00:09:10.183 "compare_and_write": false, 00:09:10.183 "abort": true, 00:09:10.183 "seek_hole": false, 00:09:10.183 "seek_data": false, 00:09:10.183 "copy": true, 00:09:10.183 "nvme_iov_md": false 00:09:10.183 }, 00:09:10.183 "memory_domains": [ 00:09:10.183 { 00:09:10.183 "dma_device_id": "system", 00:09:10.183 "dma_device_type": 1 00:09:10.183 }, 00:09:10.183 { 00:09:10.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.183 "dma_device_type": 2 00:09:10.183 } 00:09:10.183 ], 00:09:10.183 "driver_specific": {} 00:09:10.183 } 00:09:10.183 ] 00:09:10.183 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.183 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:10.183 10:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:10.183 10:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:10.183 10:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:10.183 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.183 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.183 [2024-11-15 10:37:31.233783] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:10.183 [2024-11-15 10:37:31.233842] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:10.183 [2024-11-15 10:37:31.233877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:10.183 [2024-11-15 10:37:31.236461] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:10.183 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.183 10:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:10.183 10:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.183 10:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.183 10:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:10.183 10:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.183 10:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.183 10:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.183 10:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.183 10:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.183 10:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.183 10:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.183 10:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.183 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.183 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.183 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.183 10:37:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.183 "name": "Existed_Raid", 00:09:10.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.183 "strip_size_kb": 64, 00:09:10.183 "state": "configuring", 00:09:10.183 "raid_level": "concat", 00:09:10.183 "superblock": false, 00:09:10.183 "num_base_bdevs": 3, 00:09:10.183 "num_base_bdevs_discovered": 2, 00:09:10.183 "num_base_bdevs_operational": 3, 00:09:10.183 "base_bdevs_list": [ 00:09:10.183 { 00:09:10.183 "name": "BaseBdev1", 00:09:10.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.183 "is_configured": false, 00:09:10.183 "data_offset": 0, 00:09:10.183 "data_size": 0 00:09:10.183 }, 00:09:10.183 { 00:09:10.183 "name": "BaseBdev2", 00:09:10.183 "uuid": "304bc0b0-abc0-42ee-b22f-82567e2a1c64", 00:09:10.183 "is_configured": true, 00:09:10.183 "data_offset": 0, 00:09:10.183 "data_size": 65536 00:09:10.183 }, 00:09:10.183 { 00:09:10.183 "name": "BaseBdev3", 00:09:10.183 "uuid": "00cc9fa2-0334-4bbb-92ff-8c8377a07bbf", 00:09:10.183 "is_configured": true, 00:09:10.183 "data_offset": 0, 00:09:10.183 "data_size": 65536 00:09:10.183 } 00:09:10.183 ] 00:09:10.183 }' 00:09:10.183 10:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.183 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.750 10:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:10.750 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.750 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.750 [2024-11-15 10:37:31.762042] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:10.750 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.750 10:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:10.750 10:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.750 10:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.750 10:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:10.750 10:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.750 10:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.750 10:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.750 10:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.750 10:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.750 10:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.750 10:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.750 10:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.750 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.750 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.750 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.750 10:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.750 "name": "Existed_Raid", 00:09:10.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.750 "strip_size_kb": 64, 00:09:10.750 "state": "configuring", 00:09:10.750 "raid_level": "concat", 00:09:10.750 "superblock": false, 
00:09:10.750 "num_base_bdevs": 3, 00:09:10.750 "num_base_bdevs_discovered": 1, 00:09:10.750 "num_base_bdevs_operational": 3, 00:09:10.750 "base_bdevs_list": [ 00:09:10.750 { 00:09:10.750 "name": "BaseBdev1", 00:09:10.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.750 "is_configured": false, 00:09:10.750 "data_offset": 0, 00:09:10.750 "data_size": 0 00:09:10.750 }, 00:09:10.750 { 00:09:10.750 "name": null, 00:09:10.750 "uuid": "304bc0b0-abc0-42ee-b22f-82567e2a1c64", 00:09:10.750 "is_configured": false, 00:09:10.750 "data_offset": 0, 00:09:10.750 "data_size": 65536 00:09:10.750 }, 00:09:10.750 { 00:09:10.750 "name": "BaseBdev3", 00:09:10.750 "uuid": "00cc9fa2-0334-4bbb-92ff-8c8377a07bbf", 00:09:10.750 "is_configured": true, 00:09:10.750 "data_offset": 0, 00:09:10.750 "data_size": 65536 00:09:10.750 } 00:09:10.750 ] 00:09:10.750 }' 00:09:10.750 10:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.750 10:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.316 10:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.316 10:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.316 10:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.316 10:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:11.316 10:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.316 10:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:11.316 10:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:11.316 10:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.316 
10:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.316 [2024-11-15 10:37:32.376461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:11.316 BaseBdev1 00:09:11.316 10:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.316 10:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:11.316 10:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:11.316 10:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:11.316 10:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:11.316 10:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:11.316 10:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:11.316 10:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:11.316 10:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.316 10:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.316 10:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.316 10:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:11.316 10:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.316 10:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.316 [ 00:09:11.316 { 00:09:11.316 "name": "BaseBdev1", 00:09:11.316 "aliases": [ 00:09:11.316 "0a01d21e-c736-49a4-b998-40fc8b97dfa0" 00:09:11.316 ], 00:09:11.316 "product_name": 
"Malloc disk", 00:09:11.316 "block_size": 512, 00:09:11.316 "num_blocks": 65536, 00:09:11.316 "uuid": "0a01d21e-c736-49a4-b998-40fc8b97dfa0", 00:09:11.316 "assigned_rate_limits": { 00:09:11.316 "rw_ios_per_sec": 0, 00:09:11.316 "rw_mbytes_per_sec": 0, 00:09:11.316 "r_mbytes_per_sec": 0, 00:09:11.316 "w_mbytes_per_sec": 0 00:09:11.316 }, 00:09:11.316 "claimed": true, 00:09:11.316 "claim_type": "exclusive_write", 00:09:11.317 "zoned": false, 00:09:11.317 "supported_io_types": { 00:09:11.317 "read": true, 00:09:11.317 "write": true, 00:09:11.317 "unmap": true, 00:09:11.317 "flush": true, 00:09:11.317 "reset": true, 00:09:11.317 "nvme_admin": false, 00:09:11.317 "nvme_io": false, 00:09:11.317 "nvme_io_md": false, 00:09:11.317 "write_zeroes": true, 00:09:11.317 "zcopy": true, 00:09:11.317 "get_zone_info": false, 00:09:11.317 "zone_management": false, 00:09:11.317 "zone_append": false, 00:09:11.317 "compare": false, 00:09:11.317 "compare_and_write": false, 00:09:11.317 "abort": true, 00:09:11.317 "seek_hole": false, 00:09:11.317 "seek_data": false, 00:09:11.317 "copy": true, 00:09:11.317 "nvme_iov_md": false 00:09:11.317 }, 00:09:11.317 "memory_domains": [ 00:09:11.317 { 00:09:11.317 "dma_device_id": "system", 00:09:11.317 "dma_device_type": 1 00:09:11.317 }, 00:09:11.317 { 00:09:11.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.317 "dma_device_type": 2 00:09:11.317 } 00:09:11.317 ], 00:09:11.317 "driver_specific": {} 00:09:11.317 } 00:09:11.317 ] 00:09:11.317 10:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.317 10:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:11.317 10:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:11.317 10:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.317 10:37:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.317 10:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:11.317 10:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.317 10:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.317 10:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.317 10:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.317 10:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.317 10:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.317 10:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.317 10:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.317 10:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.317 10:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.317 10:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.317 10:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.317 "name": "Existed_Raid", 00:09:11.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.317 "strip_size_kb": 64, 00:09:11.317 "state": "configuring", 00:09:11.317 "raid_level": "concat", 00:09:11.317 "superblock": false, 00:09:11.317 "num_base_bdevs": 3, 00:09:11.317 "num_base_bdevs_discovered": 2, 00:09:11.317 "num_base_bdevs_operational": 3, 00:09:11.317 "base_bdevs_list": [ 00:09:11.317 { 00:09:11.317 "name": "BaseBdev1", 
00:09:11.317 "uuid": "0a01d21e-c736-49a4-b998-40fc8b97dfa0", 00:09:11.317 "is_configured": true, 00:09:11.317 "data_offset": 0, 00:09:11.317 "data_size": 65536 00:09:11.317 }, 00:09:11.317 { 00:09:11.317 "name": null, 00:09:11.317 "uuid": "304bc0b0-abc0-42ee-b22f-82567e2a1c64", 00:09:11.317 "is_configured": false, 00:09:11.317 "data_offset": 0, 00:09:11.317 "data_size": 65536 00:09:11.317 }, 00:09:11.317 { 00:09:11.317 "name": "BaseBdev3", 00:09:11.317 "uuid": "00cc9fa2-0334-4bbb-92ff-8c8377a07bbf", 00:09:11.317 "is_configured": true, 00:09:11.317 "data_offset": 0, 00:09:11.317 "data_size": 65536 00:09:11.317 } 00:09:11.317 ] 00:09:11.317 }' 00:09:11.317 10:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.317 10:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.888 10:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.888 10:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.888 10:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.888 10:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:11.888 10:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.888 10:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:11.888 10:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:11.889 10:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.889 10:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.889 [2024-11-15 10:37:32.972750] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:11.889 
10:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.889 10:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:11.889 10:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.889 10:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.889 10:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:11.889 10:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.889 10:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.889 10:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.889 10:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.889 10:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.889 10:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.889 10:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.889 10:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.889 10:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.889 10:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.889 10:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.889 10:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.889 "name": "Existed_Raid", 00:09:11.889 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:11.889 "strip_size_kb": 64, 00:09:11.889 "state": "configuring", 00:09:11.889 "raid_level": "concat", 00:09:11.889 "superblock": false, 00:09:11.889 "num_base_bdevs": 3, 00:09:11.889 "num_base_bdevs_discovered": 1, 00:09:11.889 "num_base_bdevs_operational": 3, 00:09:11.889 "base_bdevs_list": [ 00:09:11.889 { 00:09:11.889 "name": "BaseBdev1", 00:09:11.889 "uuid": "0a01d21e-c736-49a4-b998-40fc8b97dfa0", 00:09:11.889 "is_configured": true, 00:09:11.889 "data_offset": 0, 00:09:11.889 "data_size": 65536 00:09:11.889 }, 00:09:11.889 { 00:09:11.889 "name": null, 00:09:11.889 "uuid": "304bc0b0-abc0-42ee-b22f-82567e2a1c64", 00:09:11.889 "is_configured": false, 00:09:11.889 "data_offset": 0, 00:09:11.889 "data_size": 65536 00:09:11.889 }, 00:09:11.889 { 00:09:11.889 "name": null, 00:09:11.889 "uuid": "00cc9fa2-0334-4bbb-92ff-8c8377a07bbf", 00:09:11.889 "is_configured": false, 00:09:11.889 "data_offset": 0, 00:09:11.889 "data_size": 65536 00:09:11.889 } 00:09:11.889 ] 00:09:11.889 }' 00:09:11.889 10:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.889 10:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.455 10:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:12.455 10:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.455 10:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.455 10:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.455 10:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.455 10:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:12.455 10:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:12.455 10:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.455 10:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.455 [2024-11-15 10:37:33.524913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:12.455 10:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.455 10:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:12.455 10:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.455 10:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.455 10:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:12.455 10:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.455 10:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.455 10:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.455 10:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.455 10:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.455 10:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.455 10:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.455 10:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.455 10:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:12.455 10:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.455 10:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.455 10:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.455 "name": "Existed_Raid", 00:09:12.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.455 "strip_size_kb": 64, 00:09:12.455 "state": "configuring", 00:09:12.455 "raid_level": "concat", 00:09:12.455 "superblock": false, 00:09:12.455 "num_base_bdevs": 3, 00:09:12.455 "num_base_bdevs_discovered": 2, 00:09:12.455 "num_base_bdevs_operational": 3, 00:09:12.455 "base_bdevs_list": [ 00:09:12.455 { 00:09:12.455 "name": "BaseBdev1", 00:09:12.455 "uuid": "0a01d21e-c736-49a4-b998-40fc8b97dfa0", 00:09:12.455 "is_configured": true, 00:09:12.455 "data_offset": 0, 00:09:12.455 "data_size": 65536 00:09:12.455 }, 00:09:12.455 { 00:09:12.455 "name": null, 00:09:12.455 "uuid": "304bc0b0-abc0-42ee-b22f-82567e2a1c64", 00:09:12.455 "is_configured": false, 00:09:12.455 "data_offset": 0, 00:09:12.455 "data_size": 65536 00:09:12.455 }, 00:09:12.455 { 00:09:12.455 "name": "BaseBdev3", 00:09:12.455 "uuid": "00cc9fa2-0334-4bbb-92ff-8c8377a07bbf", 00:09:12.455 "is_configured": true, 00:09:12.455 "data_offset": 0, 00:09:12.455 "data_size": 65536 00:09:12.455 } 00:09:12.455 ] 00:09:12.455 }' 00:09:12.455 10:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.455 10:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.021 10:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.021 10:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:13.021 10:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:13.021 10:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.021 10:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.021 10:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:13.021 10:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:13.021 10:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.021 10:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.021 [2024-11-15 10:37:34.105025] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:13.279 10:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.279 10:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:13.279 10:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.279 10:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.279 10:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:13.279 10:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.279 10:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.279 10:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.279 10:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.279 10:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.279 10:37:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.279 10:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.279 10:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.279 10:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.279 10:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.279 10:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.279 10:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.279 "name": "Existed_Raid", 00:09:13.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.279 "strip_size_kb": 64, 00:09:13.279 "state": "configuring", 00:09:13.279 "raid_level": "concat", 00:09:13.279 "superblock": false, 00:09:13.279 "num_base_bdevs": 3, 00:09:13.279 "num_base_bdevs_discovered": 1, 00:09:13.279 "num_base_bdevs_operational": 3, 00:09:13.279 "base_bdevs_list": [ 00:09:13.279 { 00:09:13.279 "name": null, 00:09:13.279 "uuid": "0a01d21e-c736-49a4-b998-40fc8b97dfa0", 00:09:13.279 "is_configured": false, 00:09:13.279 "data_offset": 0, 00:09:13.279 "data_size": 65536 00:09:13.279 }, 00:09:13.279 { 00:09:13.279 "name": null, 00:09:13.279 "uuid": "304bc0b0-abc0-42ee-b22f-82567e2a1c64", 00:09:13.279 "is_configured": false, 00:09:13.279 "data_offset": 0, 00:09:13.279 "data_size": 65536 00:09:13.279 }, 00:09:13.279 { 00:09:13.279 "name": "BaseBdev3", 00:09:13.279 "uuid": "00cc9fa2-0334-4bbb-92ff-8c8377a07bbf", 00:09:13.279 "is_configured": true, 00:09:13.279 "data_offset": 0, 00:09:13.279 "data_size": 65536 00:09:13.279 } 00:09:13.279 ] 00:09:13.279 }' 00:09:13.279 10:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.279 10:37:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.537 10:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.537 10:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.537 10:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.537 10:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:13.796 10:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.796 10:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:13.796 10:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:13.796 10:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.796 10:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.796 [2024-11-15 10:37:34.737296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:13.796 10:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.796 10:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:13.796 10:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.796 10:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.796 10:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:13.796 10:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.796 10:37:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.796 10:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.796 10:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.796 10:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.796 10:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.796 10:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.796 10:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.796 10:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.796 10:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.796 10:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.796 10:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.796 "name": "Existed_Raid", 00:09:13.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.796 "strip_size_kb": 64, 00:09:13.796 "state": "configuring", 00:09:13.796 "raid_level": "concat", 00:09:13.796 "superblock": false, 00:09:13.796 "num_base_bdevs": 3, 00:09:13.796 "num_base_bdevs_discovered": 2, 00:09:13.796 "num_base_bdevs_operational": 3, 00:09:13.796 "base_bdevs_list": [ 00:09:13.796 { 00:09:13.796 "name": null, 00:09:13.796 "uuid": "0a01d21e-c736-49a4-b998-40fc8b97dfa0", 00:09:13.796 "is_configured": false, 00:09:13.796 "data_offset": 0, 00:09:13.796 "data_size": 65536 00:09:13.796 }, 00:09:13.796 { 00:09:13.796 "name": "BaseBdev2", 00:09:13.796 "uuid": "304bc0b0-abc0-42ee-b22f-82567e2a1c64", 00:09:13.796 "is_configured": true, 00:09:13.796 "data_offset": 
0, 00:09:13.796 "data_size": 65536 00:09:13.796 }, 00:09:13.796 { 00:09:13.796 "name": "BaseBdev3", 00:09:13.796 "uuid": "00cc9fa2-0334-4bbb-92ff-8c8377a07bbf", 00:09:13.796 "is_configured": true, 00:09:13.796 "data_offset": 0, 00:09:13.796 "data_size": 65536 00:09:13.796 } 00:09:13.796 ] 00:09:13.796 }' 00:09:13.796 10:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.796 10:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.362 10:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.362 10:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.362 10:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.362 10:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:14.362 10:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.362 10:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:14.362 10:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.362 10:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.362 10:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.362 10:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:14.362 10:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.362 10:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0a01d21e-c736-49a4-b998-40fc8b97dfa0 00:09:14.362 10:37:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.362 10:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.362 [2024-11-15 10:37:35.403476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:14.362 [2024-11-15 10:37:35.403551] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:14.362 [2024-11-15 10:37:35.403567] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:14.362 [2024-11-15 10:37:35.403908] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:14.362 [2024-11-15 10:37:35.404108] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:14.362 [2024-11-15 10:37:35.404125] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:14.362 [2024-11-15 10:37:35.404419] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:14.362 NewBaseBdev 00:09:14.362 10:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.362 10:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:14.362 10:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:14.362 10:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:14.362 10:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:14.362 10:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:14.362 10:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:14.362 10:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:14.362 
10:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.362 10:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.362 10:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.362 10:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:14.362 10:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.362 10:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.362 [ 00:09:14.362 { 00:09:14.362 "name": "NewBaseBdev", 00:09:14.362 "aliases": [ 00:09:14.362 "0a01d21e-c736-49a4-b998-40fc8b97dfa0" 00:09:14.362 ], 00:09:14.362 "product_name": "Malloc disk", 00:09:14.362 "block_size": 512, 00:09:14.362 "num_blocks": 65536, 00:09:14.362 "uuid": "0a01d21e-c736-49a4-b998-40fc8b97dfa0", 00:09:14.362 "assigned_rate_limits": { 00:09:14.362 "rw_ios_per_sec": 0, 00:09:14.362 "rw_mbytes_per_sec": 0, 00:09:14.362 "r_mbytes_per_sec": 0, 00:09:14.362 "w_mbytes_per_sec": 0 00:09:14.362 }, 00:09:14.362 "claimed": true, 00:09:14.362 "claim_type": "exclusive_write", 00:09:14.362 "zoned": false, 00:09:14.362 "supported_io_types": { 00:09:14.362 "read": true, 00:09:14.362 "write": true, 00:09:14.362 "unmap": true, 00:09:14.362 "flush": true, 00:09:14.362 "reset": true, 00:09:14.362 "nvme_admin": false, 00:09:14.362 "nvme_io": false, 00:09:14.362 "nvme_io_md": false, 00:09:14.362 "write_zeroes": true, 00:09:14.362 "zcopy": true, 00:09:14.362 "get_zone_info": false, 00:09:14.362 "zone_management": false, 00:09:14.362 "zone_append": false, 00:09:14.362 "compare": false, 00:09:14.362 "compare_and_write": false, 00:09:14.362 "abort": true, 00:09:14.362 "seek_hole": false, 00:09:14.362 "seek_data": false, 00:09:14.362 "copy": true, 00:09:14.362 "nvme_iov_md": false 00:09:14.362 }, 00:09:14.362 
"memory_domains": [ 00:09:14.362 { 00:09:14.362 "dma_device_id": "system", 00:09:14.362 "dma_device_type": 1 00:09:14.362 }, 00:09:14.362 { 00:09:14.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.362 "dma_device_type": 2 00:09:14.362 } 00:09:14.362 ], 00:09:14.362 "driver_specific": {} 00:09:14.362 } 00:09:14.362 ] 00:09:14.362 10:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.362 10:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:14.362 10:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:14.362 10:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.362 10:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:14.362 10:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.362 10:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.362 10:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.362 10:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.362 10:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.362 10:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.363 10:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.363 10:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.363 10:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.363 10:37:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:14.363 10:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.363 10:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.363 10:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.363 "name": "Existed_Raid", 00:09:14.363 "uuid": "a78b7d4e-635f-44b5-b4fc-8c0f91e40d6d", 00:09:14.363 "strip_size_kb": 64, 00:09:14.363 "state": "online", 00:09:14.363 "raid_level": "concat", 00:09:14.363 "superblock": false, 00:09:14.363 "num_base_bdevs": 3, 00:09:14.363 "num_base_bdevs_discovered": 3, 00:09:14.363 "num_base_bdevs_operational": 3, 00:09:14.363 "base_bdevs_list": [ 00:09:14.363 { 00:09:14.363 "name": "NewBaseBdev", 00:09:14.363 "uuid": "0a01d21e-c736-49a4-b998-40fc8b97dfa0", 00:09:14.363 "is_configured": true, 00:09:14.363 "data_offset": 0, 00:09:14.363 "data_size": 65536 00:09:14.363 }, 00:09:14.363 { 00:09:14.363 "name": "BaseBdev2", 00:09:14.363 "uuid": "304bc0b0-abc0-42ee-b22f-82567e2a1c64", 00:09:14.363 "is_configured": true, 00:09:14.363 "data_offset": 0, 00:09:14.363 "data_size": 65536 00:09:14.363 }, 00:09:14.363 { 00:09:14.363 "name": "BaseBdev3", 00:09:14.363 "uuid": "00cc9fa2-0334-4bbb-92ff-8c8377a07bbf", 00:09:14.363 "is_configured": true, 00:09:14.363 "data_offset": 0, 00:09:14.363 "data_size": 65536 00:09:14.363 } 00:09:14.363 ] 00:09:14.363 }' 00:09:14.363 10:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.363 10:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.929 10:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:14.929 10:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:14.929 10:37:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:14.929 10:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:14.929 10:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:14.929 10:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:14.929 10:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:14.929 10:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.929 10:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.929 10:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:14.929 [2024-11-15 10:37:35.984222] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:14.929 10:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.929 10:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:14.929 "name": "Existed_Raid", 00:09:14.929 "aliases": [ 00:09:14.929 "a78b7d4e-635f-44b5-b4fc-8c0f91e40d6d" 00:09:14.929 ], 00:09:14.929 "product_name": "Raid Volume", 00:09:14.930 "block_size": 512, 00:09:14.930 "num_blocks": 196608, 00:09:14.930 "uuid": "a78b7d4e-635f-44b5-b4fc-8c0f91e40d6d", 00:09:14.930 "assigned_rate_limits": { 00:09:14.930 "rw_ios_per_sec": 0, 00:09:14.930 "rw_mbytes_per_sec": 0, 00:09:14.930 "r_mbytes_per_sec": 0, 00:09:14.930 "w_mbytes_per_sec": 0 00:09:14.930 }, 00:09:14.930 "claimed": false, 00:09:14.930 "zoned": false, 00:09:14.930 "supported_io_types": { 00:09:14.930 "read": true, 00:09:14.930 "write": true, 00:09:14.930 "unmap": true, 00:09:14.930 "flush": true, 00:09:14.930 "reset": true, 00:09:14.930 "nvme_admin": false, 00:09:14.930 "nvme_io": false, 00:09:14.930 "nvme_io_md": false, 00:09:14.930 
"write_zeroes": true, 00:09:14.930 "zcopy": false, 00:09:14.930 "get_zone_info": false, 00:09:14.930 "zone_management": false, 00:09:14.930 "zone_append": false, 00:09:14.930 "compare": false, 00:09:14.930 "compare_and_write": false, 00:09:14.930 "abort": false, 00:09:14.930 "seek_hole": false, 00:09:14.930 "seek_data": false, 00:09:14.930 "copy": false, 00:09:14.930 "nvme_iov_md": false 00:09:14.930 }, 00:09:14.930 "memory_domains": [ 00:09:14.930 { 00:09:14.930 "dma_device_id": "system", 00:09:14.930 "dma_device_type": 1 00:09:14.930 }, 00:09:14.930 { 00:09:14.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.930 "dma_device_type": 2 00:09:14.930 }, 00:09:14.930 { 00:09:14.930 "dma_device_id": "system", 00:09:14.930 "dma_device_type": 1 00:09:14.930 }, 00:09:14.930 { 00:09:14.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.930 "dma_device_type": 2 00:09:14.930 }, 00:09:14.930 { 00:09:14.930 "dma_device_id": "system", 00:09:14.930 "dma_device_type": 1 00:09:14.930 }, 00:09:14.930 { 00:09:14.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.930 "dma_device_type": 2 00:09:14.930 } 00:09:14.930 ], 00:09:14.930 "driver_specific": { 00:09:14.930 "raid": { 00:09:14.930 "uuid": "a78b7d4e-635f-44b5-b4fc-8c0f91e40d6d", 00:09:14.930 "strip_size_kb": 64, 00:09:14.930 "state": "online", 00:09:14.930 "raid_level": "concat", 00:09:14.930 "superblock": false, 00:09:14.930 "num_base_bdevs": 3, 00:09:14.930 "num_base_bdevs_discovered": 3, 00:09:14.930 "num_base_bdevs_operational": 3, 00:09:14.930 "base_bdevs_list": [ 00:09:14.930 { 00:09:14.930 "name": "NewBaseBdev", 00:09:14.930 "uuid": "0a01d21e-c736-49a4-b998-40fc8b97dfa0", 00:09:14.930 "is_configured": true, 00:09:14.930 "data_offset": 0, 00:09:14.930 "data_size": 65536 00:09:14.930 }, 00:09:14.930 { 00:09:14.930 "name": "BaseBdev2", 00:09:14.930 "uuid": "304bc0b0-abc0-42ee-b22f-82567e2a1c64", 00:09:14.930 "is_configured": true, 00:09:14.930 "data_offset": 0, 00:09:14.930 "data_size": 65536 00:09:14.930 }, 
00:09:14.930 { 00:09:14.930 "name": "BaseBdev3", 00:09:14.930 "uuid": "00cc9fa2-0334-4bbb-92ff-8c8377a07bbf", 00:09:14.930 "is_configured": true, 00:09:14.930 "data_offset": 0, 00:09:14.930 "data_size": 65536 00:09:14.930 } 00:09:14.930 ] 00:09:14.930 } 00:09:14.930 } 00:09:14.930 }' 00:09:14.930 10:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:14.930 10:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:14.930 BaseBdev2 00:09:14.930 BaseBdev3' 00:09:14.930 10:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.188 10:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:15.189 10:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:15.189 10:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:15.189 10:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.189 10:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.189 10:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.189 10:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.189 10:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:15.189 10:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:15.189 10:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:15.189 10:37:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:15.189 10:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.189 10:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.189 10:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.189 10:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.189 10:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:15.189 10:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:15.189 10:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:15.189 10:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:15.189 10:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.189 10:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.189 10:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.189 10:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.189 10:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:15.189 10:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:15.189 10:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:15.189 10:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.189 
10:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.189 [2024-11-15 10:37:36.287941] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:15.189 [2024-11-15 10:37:36.287976] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:15.189 [2024-11-15 10:37:36.288093] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:15.189 [2024-11-15 10:37:36.288170] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:15.189 [2024-11-15 10:37:36.288190] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:15.189 10:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.189 10:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65616 00:09:15.189 10:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65616 ']' 00:09:15.189 10:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65616 00:09:15.189 10:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:15.189 10:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:15.189 10:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65616 00:09:15.189 10:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:15.189 killing process with pid 65616 00:09:15.189 10:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:15.189 10:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65616' 00:09:15.189 10:37:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 65616 00:09:15.189 [2024-11-15 10:37:36.327722] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:15.189 10:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65616 00:09:15.451 [2024-11-15 10:37:36.594996] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:16.832 ************************************ 00:09:16.832 END TEST raid_state_function_test 00:09:16.832 10:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:16.832 00:09:16.832 real 0m11.730s 00:09:16.832 user 0m19.532s 00:09:16.832 sys 0m1.514s 00:09:16.832 10:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.832 10:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.832 ************************************ 00:09:16.832 10:37:37 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:09:16.832 10:37:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:16.832 10:37:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.832 10:37:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:16.832 ************************************ 00:09:16.832 START TEST raid_state_function_test_sb 00:09:16.832 ************************************ 00:09:16.832 10:37:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:09:16.832 10:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:16.832 10:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:16.832 10:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:16.832 10:37:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:16.832 10:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:16.832 10:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:16.832 10:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:16.832 10:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:16.832 10:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:16.832 10:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:16.832 10:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:16.832 10:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:16.832 10:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:16.832 10:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:16.832 10:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:16.832 Process raid pid: 66247 00:09:16.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:16.832 10:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:16.832 10:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:16.832 10:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:16.832 10:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:16.832 10:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:16.832 10:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:16.832 10:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:16.832 10:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:16.832 10:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:16.832 10:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:16.832 10:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:16.832 10:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66247 00:09:16.832 10:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:16.832 10:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66247' 00:09:16.832 10:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66247 00:09:16.832 10:37:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66247 ']' 00:09:16.832 10:37:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:09:16.832 10:37:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:16.832 10:37:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.832 10:37:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:16.832 10:37:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.832 [2024-11-15 10:37:37.763509] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:09:16.832 [2024-11-15 10:37:37.763878] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:16.832 [2024-11-15 10:37:37.939884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.091 [2024-11-15 10:37:38.070233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.348 [2024-11-15 10:37:38.277719] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:17.348 [2024-11-15 10:37:38.277983] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:17.606 10:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:17.606 10:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:17.606 10:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:17.606 10:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.606 10:37:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:17.606 [2024-11-15 10:37:38.760576] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:17.606 [2024-11-15 10:37:38.761203] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:17.606 [2024-11-15 10:37:38.761340] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:17.606 [2024-11-15 10:37:38.761533] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:17.606 [2024-11-15 10:37:38.761651] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:17.606 [2024-11-15 10:37:38.761828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:17.606 10:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.864 10:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:17.864 10:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.864 10:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.864 10:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:17.864 10:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.864 10:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.864 10:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.864 10:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.864 10:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:09:17.864 10:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.864 10:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.864 10:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.864 10:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.864 10:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.864 10:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.864 10:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.864 "name": "Existed_Raid", 00:09:17.864 "uuid": "afeeaf80-ca8f-4449-b8fd-38a61a38147a", 00:09:17.864 "strip_size_kb": 64, 00:09:17.864 "state": "configuring", 00:09:17.864 "raid_level": "concat", 00:09:17.864 "superblock": true, 00:09:17.864 "num_base_bdevs": 3, 00:09:17.864 "num_base_bdevs_discovered": 0, 00:09:17.864 "num_base_bdevs_operational": 3, 00:09:17.864 "base_bdevs_list": [ 00:09:17.864 { 00:09:17.864 "name": "BaseBdev1", 00:09:17.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.864 "is_configured": false, 00:09:17.864 "data_offset": 0, 00:09:17.864 "data_size": 0 00:09:17.864 }, 00:09:17.864 { 00:09:17.864 "name": "BaseBdev2", 00:09:17.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.864 "is_configured": false, 00:09:17.864 "data_offset": 0, 00:09:17.864 "data_size": 0 00:09:17.864 }, 00:09:17.864 { 00:09:17.864 "name": "BaseBdev3", 00:09:17.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.864 "is_configured": false, 00:09:17.864 "data_offset": 0, 00:09:17.864 "data_size": 0 00:09:17.864 } 00:09:17.864 ] 00:09:17.864 }' 00:09:17.864 10:37:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.864 10:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.121 10:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:18.121 10:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.121 10:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.121 [2024-11-15 10:37:39.248631] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:18.121 [2024-11-15 10:37:39.248690] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:18.121 10:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.121 10:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:18.121 10:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.121 10:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.121 [2024-11-15 10:37:39.256620] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:18.121 [2024-11-15 10:37:39.256689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:18.121 [2024-11-15 10:37:39.256706] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:18.121 [2024-11-15 10:37:39.256728] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:18.121 [2024-11-15 10:37:39.256738] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:18.121 [2024-11-15 10:37:39.256753] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:18.121 10:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.121 10:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:18.121 10:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.121 10:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.379 [2024-11-15 10:37:39.301523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:18.379 BaseBdev1 00:09:18.379 10:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.379 10:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:18.379 10:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:18.379 10:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:18.379 10:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:18.379 10:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:18.379 10:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:18.379 10:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:18.379 10:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.379 10:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.379 10:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.379 10:37:39 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:18.379 10:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.379 10:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.379 [ 00:09:18.379 { 00:09:18.379 "name": "BaseBdev1", 00:09:18.379 "aliases": [ 00:09:18.379 "43e340d5-0659-4da4-80b6-24bb42f69567" 00:09:18.379 ], 00:09:18.379 "product_name": "Malloc disk", 00:09:18.379 "block_size": 512, 00:09:18.379 "num_blocks": 65536, 00:09:18.379 "uuid": "43e340d5-0659-4da4-80b6-24bb42f69567", 00:09:18.379 "assigned_rate_limits": { 00:09:18.379 "rw_ios_per_sec": 0, 00:09:18.379 "rw_mbytes_per_sec": 0, 00:09:18.379 "r_mbytes_per_sec": 0, 00:09:18.379 "w_mbytes_per_sec": 0 00:09:18.379 }, 00:09:18.379 "claimed": true, 00:09:18.379 "claim_type": "exclusive_write", 00:09:18.379 "zoned": false, 00:09:18.379 "supported_io_types": { 00:09:18.379 "read": true, 00:09:18.379 "write": true, 00:09:18.379 "unmap": true, 00:09:18.379 "flush": true, 00:09:18.379 "reset": true, 00:09:18.379 "nvme_admin": false, 00:09:18.379 "nvme_io": false, 00:09:18.379 "nvme_io_md": false, 00:09:18.379 "write_zeroes": true, 00:09:18.379 "zcopy": true, 00:09:18.379 "get_zone_info": false, 00:09:18.379 "zone_management": false, 00:09:18.379 "zone_append": false, 00:09:18.379 "compare": false, 00:09:18.379 "compare_and_write": false, 00:09:18.379 "abort": true, 00:09:18.379 "seek_hole": false, 00:09:18.379 "seek_data": false, 00:09:18.379 "copy": true, 00:09:18.379 "nvme_iov_md": false 00:09:18.379 }, 00:09:18.379 "memory_domains": [ 00:09:18.379 { 00:09:18.379 "dma_device_id": "system", 00:09:18.379 "dma_device_type": 1 00:09:18.379 }, 00:09:18.379 { 00:09:18.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.379 "dma_device_type": 2 00:09:18.379 } 00:09:18.379 ], 00:09:18.379 "driver_specific": {} 00:09:18.379 } 00:09:18.379 ] 00:09:18.379 10:37:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.379 10:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:18.379 10:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:18.379 10:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.379 10:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.379 10:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:18.379 10:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.380 10:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.380 10:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.380 10:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.380 10:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.380 10:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.380 10:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.380 10:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.380 10:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.380 10:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.380 10:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.380 10:37:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.380 "name": "Existed_Raid", 00:09:18.380 "uuid": "7260d237-4cf4-4219-b60f-f433e31b2a58", 00:09:18.380 "strip_size_kb": 64, 00:09:18.380 "state": "configuring", 00:09:18.380 "raid_level": "concat", 00:09:18.380 "superblock": true, 00:09:18.380 "num_base_bdevs": 3, 00:09:18.380 "num_base_bdevs_discovered": 1, 00:09:18.380 "num_base_bdevs_operational": 3, 00:09:18.380 "base_bdevs_list": [ 00:09:18.380 { 00:09:18.380 "name": "BaseBdev1", 00:09:18.380 "uuid": "43e340d5-0659-4da4-80b6-24bb42f69567", 00:09:18.380 "is_configured": true, 00:09:18.380 "data_offset": 2048, 00:09:18.380 "data_size": 63488 00:09:18.380 }, 00:09:18.380 { 00:09:18.380 "name": "BaseBdev2", 00:09:18.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.380 "is_configured": false, 00:09:18.380 "data_offset": 0, 00:09:18.380 "data_size": 0 00:09:18.380 }, 00:09:18.380 { 00:09:18.380 "name": "BaseBdev3", 00:09:18.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.380 "is_configured": false, 00:09:18.380 "data_offset": 0, 00:09:18.380 "data_size": 0 00:09:18.380 } 00:09:18.380 ] 00:09:18.380 }' 00:09:18.380 10:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.380 10:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.943 10:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:18.943 10:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.943 10:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.943 [2024-11-15 10:37:39.861708] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:18.943 [2024-11-15 10:37:39.861786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, 
state configuring 00:09:18.943 10:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.943 10:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:18.943 10:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.943 10:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.943 [2024-11-15 10:37:39.869786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:18.943 [2024-11-15 10:37:39.872285] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:18.943 [2024-11-15 10:37:39.872339] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:18.943 [2024-11-15 10:37:39.872356] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:18.943 [2024-11-15 10:37:39.872371] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:18.943 10:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.943 10:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:18.943 10:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:18.943 10:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:18.943 10:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.943 10:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.943 10:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 
-- # local raid_level=concat 00:09:18.943 10:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.943 10:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.943 10:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.943 10:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.943 10:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.943 10:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.943 10:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.943 10:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.943 10:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.943 10:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.943 10:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.943 10:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.943 "name": "Existed_Raid", 00:09:18.943 "uuid": "583c731b-a2af-4b3f-8338-0302c2f17f02", 00:09:18.944 "strip_size_kb": 64, 00:09:18.944 "state": "configuring", 00:09:18.944 "raid_level": "concat", 00:09:18.944 "superblock": true, 00:09:18.944 "num_base_bdevs": 3, 00:09:18.944 "num_base_bdevs_discovered": 1, 00:09:18.944 "num_base_bdevs_operational": 3, 00:09:18.944 "base_bdevs_list": [ 00:09:18.944 { 00:09:18.944 "name": "BaseBdev1", 00:09:18.944 "uuid": "43e340d5-0659-4da4-80b6-24bb42f69567", 00:09:18.944 "is_configured": true, 00:09:18.944 "data_offset": 2048, 00:09:18.944 
"data_size": 63488 00:09:18.944 }, 00:09:18.944 { 00:09:18.944 "name": "BaseBdev2", 00:09:18.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.944 "is_configured": false, 00:09:18.944 "data_offset": 0, 00:09:18.944 "data_size": 0 00:09:18.944 }, 00:09:18.944 { 00:09:18.944 "name": "BaseBdev3", 00:09:18.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.944 "is_configured": false, 00:09:18.944 "data_offset": 0, 00:09:18.944 "data_size": 0 00:09:18.944 } 00:09:18.944 ] 00:09:18.944 }' 00:09:18.944 10:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.944 10:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.514 10:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:19.514 10:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.514 10:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.514 [2024-11-15 10:37:40.436976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:19.514 BaseBdev2 00:09:19.514 10:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.514 10:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:19.514 10:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:19.514 10:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:19.514 10:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:19.514 10:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:19.514 10:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:09:19.514 10:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:19.514 10:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.514 10:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.514 10:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.514 10:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:19.514 10:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.515 10:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.515 [ 00:09:19.515 { 00:09:19.515 "name": "BaseBdev2", 00:09:19.515 "aliases": [ 00:09:19.515 "8e2e0ff9-af5d-49fc-86f2-4863d72649f4" 00:09:19.515 ], 00:09:19.515 "product_name": "Malloc disk", 00:09:19.515 "block_size": 512, 00:09:19.515 "num_blocks": 65536, 00:09:19.515 "uuid": "8e2e0ff9-af5d-49fc-86f2-4863d72649f4", 00:09:19.515 "assigned_rate_limits": { 00:09:19.515 "rw_ios_per_sec": 0, 00:09:19.515 "rw_mbytes_per_sec": 0, 00:09:19.515 "r_mbytes_per_sec": 0, 00:09:19.515 "w_mbytes_per_sec": 0 00:09:19.515 }, 00:09:19.515 "claimed": true, 00:09:19.515 "claim_type": "exclusive_write", 00:09:19.515 "zoned": false, 00:09:19.515 "supported_io_types": { 00:09:19.515 "read": true, 00:09:19.515 "write": true, 00:09:19.515 "unmap": true, 00:09:19.515 "flush": true, 00:09:19.515 "reset": true, 00:09:19.515 "nvme_admin": false, 00:09:19.515 "nvme_io": false, 00:09:19.515 "nvme_io_md": false, 00:09:19.515 "write_zeroes": true, 00:09:19.515 "zcopy": true, 00:09:19.515 "get_zone_info": false, 00:09:19.515 "zone_management": false, 00:09:19.515 "zone_append": false, 00:09:19.515 "compare": false, 00:09:19.515 "compare_and_write": false, 00:09:19.515 
"abort": true, 00:09:19.515 "seek_hole": false, 00:09:19.515 "seek_data": false, 00:09:19.515 "copy": true, 00:09:19.515 "nvme_iov_md": false 00:09:19.515 }, 00:09:19.515 "memory_domains": [ 00:09:19.515 { 00:09:19.515 "dma_device_id": "system", 00:09:19.515 "dma_device_type": 1 00:09:19.515 }, 00:09:19.515 { 00:09:19.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.515 "dma_device_type": 2 00:09:19.515 } 00:09:19.515 ], 00:09:19.515 "driver_specific": {} 00:09:19.515 } 00:09:19.515 ] 00:09:19.515 10:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.515 10:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:19.515 10:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:19.515 10:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:19.515 10:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:19.515 10:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.515 10:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.515 10:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:19.515 10:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.515 10:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.515 10:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.515 10:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.515 10:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:19.515 10:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.515 10:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.515 10:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.515 10:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.515 10:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.515 10:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.515 10:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.515 "name": "Existed_Raid", 00:09:19.515 "uuid": "583c731b-a2af-4b3f-8338-0302c2f17f02", 00:09:19.515 "strip_size_kb": 64, 00:09:19.515 "state": "configuring", 00:09:19.515 "raid_level": "concat", 00:09:19.515 "superblock": true, 00:09:19.515 "num_base_bdevs": 3, 00:09:19.515 "num_base_bdevs_discovered": 2, 00:09:19.515 "num_base_bdevs_operational": 3, 00:09:19.515 "base_bdevs_list": [ 00:09:19.515 { 00:09:19.515 "name": "BaseBdev1", 00:09:19.515 "uuid": "43e340d5-0659-4da4-80b6-24bb42f69567", 00:09:19.515 "is_configured": true, 00:09:19.515 "data_offset": 2048, 00:09:19.515 "data_size": 63488 00:09:19.515 }, 00:09:19.515 { 00:09:19.515 "name": "BaseBdev2", 00:09:19.515 "uuid": "8e2e0ff9-af5d-49fc-86f2-4863d72649f4", 00:09:19.515 "is_configured": true, 00:09:19.515 "data_offset": 2048, 00:09:19.515 "data_size": 63488 00:09:19.515 }, 00:09:19.515 { 00:09:19.515 "name": "BaseBdev3", 00:09:19.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.515 "is_configured": false, 00:09:19.515 "data_offset": 0, 00:09:19.515 "data_size": 0 00:09:19.515 } 00:09:19.515 ] 00:09:19.515 }' 00:09:19.515 10:37:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.515 10:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.081 10:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:20.081 10:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.081 10:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.081 [2024-11-15 10:37:41.038197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:20.081 [2024-11-15 10:37:41.038541] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:20.081 [2024-11-15 10:37:41.038575] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:20.081 BaseBdev3 00:09:20.081 [2024-11-15 10:37:41.038918] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:20.081 [2024-11-15 10:37:41.039142] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:20.081 [2024-11-15 10:37:41.039161] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:20.081 [2024-11-15 10:37:41.039354] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:20.081 10:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.081 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:20.081 10:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:20.081 10:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:20.081 10:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 
00:09:20.081 10:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:20.081 10:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:20.081 10:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:20.081 10:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.081 10:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.081 10:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.081 10:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:20.081 10:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.081 10:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.081 [ 00:09:20.081 { 00:09:20.081 "name": "BaseBdev3", 00:09:20.081 "aliases": [ 00:09:20.081 "d2b0d661-58a9-4a62-87af-e710092c269b" 00:09:20.081 ], 00:09:20.081 "product_name": "Malloc disk", 00:09:20.081 "block_size": 512, 00:09:20.081 "num_blocks": 65536, 00:09:20.081 "uuid": "d2b0d661-58a9-4a62-87af-e710092c269b", 00:09:20.081 "assigned_rate_limits": { 00:09:20.081 "rw_ios_per_sec": 0, 00:09:20.081 "rw_mbytes_per_sec": 0, 00:09:20.081 "r_mbytes_per_sec": 0, 00:09:20.081 "w_mbytes_per_sec": 0 00:09:20.081 }, 00:09:20.081 "claimed": true, 00:09:20.081 "claim_type": "exclusive_write", 00:09:20.081 "zoned": false, 00:09:20.081 "supported_io_types": { 00:09:20.081 "read": true, 00:09:20.081 "write": true, 00:09:20.081 "unmap": true, 00:09:20.081 "flush": true, 00:09:20.081 "reset": true, 00:09:20.081 "nvme_admin": false, 00:09:20.081 "nvme_io": false, 00:09:20.081 "nvme_io_md": false, 00:09:20.081 "write_zeroes": true, 00:09:20.081 
"zcopy": true, 00:09:20.081 "get_zone_info": false, 00:09:20.081 "zone_management": false, 00:09:20.081 "zone_append": false, 00:09:20.081 "compare": false, 00:09:20.081 "compare_and_write": false, 00:09:20.081 "abort": true, 00:09:20.081 "seek_hole": false, 00:09:20.081 "seek_data": false, 00:09:20.081 "copy": true, 00:09:20.081 "nvme_iov_md": false 00:09:20.081 }, 00:09:20.081 "memory_domains": [ 00:09:20.081 { 00:09:20.081 "dma_device_id": "system", 00:09:20.081 "dma_device_type": 1 00:09:20.081 }, 00:09:20.081 { 00:09:20.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.081 "dma_device_type": 2 00:09:20.081 } 00:09:20.081 ], 00:09:20.081 "driver_specific": {} 00:09:20.081 } 00:09:20.081 ] 00:09:20.081 10:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.081 10:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:20.081 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:20.081 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:20.081 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:20.081 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.081 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:20.081 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:20.081 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.081 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.081 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.081 
10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.081 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.081 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.081 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.081 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.081 10:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.081 10:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.081 10:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.081 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.081 "name": "Existed_Raid", 00:09:20.081 "uuid": "583c731b-a2af-4b3f-8338-0302c2f17f02", 00:09:20.081 "strip_size_kb": 64, 00:09:20.081 "state": "online", 00:09:20.081 "raid_level": "concat", 00:09:20.081 "superblock": true, 00:09:20.081 "num_base_bdevs": 3, 00:09:20.081 "num_base_bdevs_discovered": 3, 00:09:20.081 "num_base_bdevs_operational": 3, 00:09:20.081 "base_bdevs_list": [ 00:09:20.081 { 00:09:20.081 "name": "BaseBdev1", 00:09:20.081 "uuid": "43e340d5-0659-4da4-80b6-24bb42f69567", 00:09:20.081 "is_configured": true, 00:09:20.081 "data_offset": 2048, 00:09:20.081 "data_size": 63488 00:09:20.081 }, 00:09:20.081 { 00:09:20.081 "name": "BaseBdev2", 00:09:20.081 "uuid": "8e2e0ff9-af5d-49fc-86f2-4863d72649f4", 00:09:20.081 "is_configured": true, 00:09:20.081 "data_offset": 2048, 00:09:20.081 "data_size": 63488 00:09:20.081 }, 00:09:20.081 { 00:09:20.081 "name": "BaseBdev3", 00:09:20.081 "uuid": "d2b0d661-58a9-4a62-87af-e710092c269b", 00:09:20.081 
"is_configured": true, 00:09:20.081 "data_offset": 2048, 00:09:20.081 "data_size": 63488 00:09:20.081 } 00:09:20.081 ] 00:09:20.081 }' 00:09:20.081 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.081 10:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.647 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:20.647 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:20.647 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:20.647 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:20.647 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:20.647 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:20.647 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:20.647 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:20.647 10:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.647 10:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.647 [2024-11-15 10:37:41.566790] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:20.647 10:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.647 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:20.647 "name": "Existed_Raid", 00:09:20.647 "aliases": [ 00:09:20.647 "583c731b-a2af-4b3f-8338-0302c2f17f02" 00:09:20.647 ], 00:09:20.647 "product_name": "Raid 
Volume", 00:09:20.647 "block_size": 512, 00:09:20.647 "num_blocks": 190464, 00:09:20.647 "uuid": "583c731b-a2af-4b3f-8338-0302c2f17f02", 00:09:20.647 "assigned_rate_limits": { 00:09:20.647 "rw_ios_per_sec": 0, 00:09:20.647 "rw_mbytes_per_sec": 0, 00:09:20.647 "r_mbytes_per_sec": 0, 00:09:20.647 "w_mbytes_per_sec": 0 00:09:20.647 }, 00:09:20.647 "claimed": false, 00:09:20.647 "zoned": false, 00:09:20.647 "supported_io_types": { 00:09:20.647 "read": true, 00:09:20.647 "write": true, 00:09:20.647 "unmap": true, 00:09:20.647 "flush": true, 00:09:20.647 "reset": true, 00:09:20.647 "nvme_admin": false, 00:09:20.647 "nvme_io": false, 00:09:20.647 "nvme_io_md": false, 00:09:20.647 "write_zeroes": true, 00:09:20.647 "zcopy": false, 00:09:20.647 "get_zone_info": false, 00:09:20.647 "zone_management": false, 00:09:20.647 "zone_append": false, 00:09:20.647 "compare": false, 00:09:20.647 "compare_and_write": false, 00:09:20.647 "abort": false, 00:09:20.647 "seek_hole": false, 00:09:20.647 "seek_data": false, 00:09:20.647 "copy": false, 00:09:20.647 "nvme_iov_md": false 00:09:20.647 }, 00:09:20.647 "memory_domains": [ 00:09:20.647 { 00:09:20.647 "dma_device_id": "system", 00:09:20.647 "dma_device_type": 1 00:09:20.647 }, 00:09:20.647 { 00:09:20.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.647 "dma_device_type": 2 00:09:20.647 }, 00:09:20.647 { 00:09:20.647 "dma_device_id": "system", 00:09:20.647 "dma_device_type": 1 00:09:20.647 }, 00:09:20.647 { 00:09:20.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.647 "dma_device_type": 2 00:09:20.647 }, 00:09:20.647 { 00:09:20.647 "dma_device_id": "system", 00:09:20.647 "dma_device_type": 1 00:09:20.647 }, 00:09:20.647 { 00:09:20.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.647 "dma_device_type": 2 00:09:20.647 } 00:09:20.647 ], 00:09:20.647 "driver_specific": { 00:09:20.647 "raid": { 00:09:20.647 "uuid": "583c731b-a2af-4b3f-8338-0302c2f17f02", 00:09:20.647 "strip_size_kb": 64, 00:09:20.647 "state": "online", 
00:09:20.647 "raid_level": "concat", 00:09:20.647 "superblock": true, 00:09:20.647 "num_base_bdevs": 3, 00:09:20.647 "num_base_bdevs_discovered": 3, 00:09:20.647 "num_base_bdevs_operational": 3, 00:09:20.647 "base_bdevs_list": [ 00:09:20.647 { 00:09:20.647 "name": "BaseBdev1", 00:09:20.647 "uuid": "43e340d5-0659-4da4-80b6-24bb42f69567", 00:09:20.647 "is_configured": true, 00:09:20.647 "data_offset": 2048, 00:09:20.647 "data_size": 63488 00:09:20.647 }, 00:09:20.647 { 00:09:20.647 "name": "BaseBdev2", 00:09:20.647 "uuid": "8e2e0ff9-af5d-49fc-86f2-4863d72649f4", 00:09:20.647 "is_configured": true, 00:09:20.647 "data_offset": 2048, 00:09:20.647 "data_size": 63488 00:09:20.647 }, 00:09:20.647 { 00:09:20.647 "name": "BaseBdev3", 00:09:20.647 "uuid": "d2b0d661-58a9-4a62-87af-e710092c269b", 00:09:20.647 "is_configured": true, 00:09:20.647 "data_offset": 2048, 00:09:20.647 "data_size": 63488 00:09:20.647 } 00:09:20.647 ] 00:09:20.647 } 00:09:20.647 } 00:09:20.647 }' 00:09:20.647 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:20.648 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:20.648 BaseBdev2 00:09:20.648 BaseBdev3' 00:09:20.648 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.648 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:20.648 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.648 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:20.648 10:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.648 10:37:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.648 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.648 10:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.648 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.648 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.648 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.648 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:20.648 10:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.648 10:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.648 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.648 10:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.906 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.906 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.906 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.906 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:20.906 10:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.906 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.906 10:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.906 10:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.906 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.906 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.906 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:20.906 10:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.906 10:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.906 [2024-11-15 10:37:41.866548] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:20.906 [2024-11-15 10:37:41.866593] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:20.906 [2024-11-15 10:37:41.866664] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:20.906 10:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.906 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:20.906 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:20.906 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:20.906 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:20.906 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:20.906 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline 
concat 64 2 00:09:20.906 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.906 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:20.906 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:20.906 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.906 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:20.906 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.906 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.906 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.906 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.906 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.906 10:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.906 10:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.906 10:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.906 10:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.906 10:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.906 "name": "Existed_Raid", 00:09:20.906 "uuid": "583c731b-a2af-4b3f-8338-0302c2f17f02", 00:09:20.906 "strip_size_kb": 64, 00:09:20.906 "state": "offline", 00:09:20.906 "raid_level": "concat", 00:09:20.906 "superblock": true, 00:09:20.906 
"num_base_bdevs": 3, 00:09:20.906 "num_base_bdevs_discovered": 2, 00:09:20.906 "num_base_bdevs_operational": 2, 00:09:20.906 "base_bdevs_list": [ 00:09:20.906 { 00:09:20.906 "name": null, 00:09:20.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.906 "is_configured": false, 00:09:20.906 "data_offset": 0, 00:09:20.906 "data_size": 63488 00:09:20.906 }, 00:09:20.906 { 00:09:20.906 "name": "BaseBdev2", 00:09:20.906 "uuid": "8e2e0ff9-af5d-49fc-86f2-4863d72649f4", 00:09:20.906 "is_configured": true, 00:09:20.906 "data_offset": 2048, 00:09:20.906 "data_size": 63488 00:09:20.906 }, 00:09:20.906 { 00:09:20.906 "name": "BaseBdev3", 00:09:20.906 "uuid": "d2b0d661-58a9-4a62-87af-e710092c269b", 00:09:20.906 "is_configured": true, 00:09:20.906 "data_offset": 2048, 00:09:20.906 "data_size": 63488 00:09:20.906 } 00:09:20.906 ] 00:09:20.906 }' 00:09:20.906 10:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.907 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.527 10:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:21.527 10:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:21.527 10:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:21.527 10:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.527 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.527 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.527 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.527 10:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:21.527 10:37:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:21.527 10:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:21.527 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.527 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.527 [2024-11-15 10:37:42.534557] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:21.527 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.527 10:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:21.527 10:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:21.527 10:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.527 10:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:21.527 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.527 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.527 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.527 10:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:21.527 10:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:21.527 10:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:21.527 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.527 10:37:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:21.809 [2024-11-15 10:37:42.678890] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:21.809 [2024-11-15 10:37:42.678956] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.809 
10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.809 BaseBdev2 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.809 [ 00:09:21.809 { 00:09:21.809 "name": "BaseBdev2", 00:09:21.809 "aliases": [ 00:09:21.809 "2403ae00-a546-4838-934b-ff60e0709d99" 00:09:21.809 ], 00:09:21.809 "product_name": "Malloc disk", 00:09:21.809 "block_size": 512, 00:09:21.809 "num_blocks": 65536, 
00:09:21.809 "uuid": "2403ae00-a546-4838-934b-ff60e0709d99", 00:09:21.809 "assigned_rate_limits": { 00:09:21.809 "rw_ios_per_sec": 0, 00:09:21.809 "rw_mbytes_per_sec": 0, 00:09:21.809 "r_mbytes_per_sec": 0, 00:09:21.809 "w_mbytes_per_sec": 0 00:09:21.809 }, 00:09:21.809 "claimed": false, 00:09:21.809 "zoned": false, 00:09:21.809 "supported_io_types": { 00:09:21.809 "read": true, 00:09:21.809 "write": true, 00:09:21.809 "unmap": true, 00:09:21.809 "flush": true, 00:09:21.809 "reset": true, 00:09:21.809 "nvme_admin": false, 00:09:21.809 "nvme_io": false, 00:09:21.809 "nvme_io_md": false, 00:09:21.809 "write_zeroes": true, 00:09:21.809 "zcopy": true, 00:09:21.809 "get_zone_info": false, 00:09:21.809 "zone_management": false, 00:09:21.809 "zone_append": false, 00:09:21.809 "compare": false, 00:09:21.809 "compare_and_write": false, 00:09:21.809 "abort": true, 00:09:21.809 "seek_hole": false, 00:09:21.809 "seek_data": false, 00:09:21.809 "copy": true, 00:09:21.809 "nvme_iov_md": false 00:09:21.809 }, 00:09:21.809 "memory_domains": [ 00:09:21.809 { 00:09:21.809 "dma_device_id": "system", 00:09:21.809 "dma_device_type": 1 00:09:21.809 }, 00:09:21.809 { 00:09:21.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.809 "dma_device_type": 2 00:09:21.809 } 00:09:21.809 ], 00:09:21.809 "driver_specific": {} 00:09:21.809 } 00:09:21.809 ] 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.809 BaseBdev3 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.809 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.809 [ 00:09:21.809 { 00:09:21.809 "name": "BaseBdev3", 00:09:21.809 "aliases": [ 00:09:21.809 "3774f728-285b-4c46-aa6a-70678c79afe7" 00:09:21.809 ], 00:09:21.809 "product_name": "Malloc disk", 
00:09:21.809 "block_size": 512, 00:09:21.809 "num_blocks": 65536, 00:09:21.809 "uuid": "3774f728-285b-4c46-aa6a-70678c79afe7", 00:09:21.809 "assigned_rate_limits": { 00:09:21.809 "rw_ios_per_sec": 0, 00:09:21.809 "rw_mbytes_per_sec": 0, 00:09:21.809 "r_mbytes_per_sec": 0, 00:09:21.809 "w_mbytes_per_sec": 0 00:09:21.809 }, 00:09:21.809 "claimed": false, 00:09:21.809 "zoned": false, 00:09:21.809 "supported_io_types": { 00:09:21.809 "read": true, 00:09:21.809 "write": true, 00:09:21.809 "unmap": true, 00:09:21.809 "flush": true, 00:09:21.809 "reset": true, 00:09:21.809 "nvme_admin": false, 00:09:21.809 "nvme_io": false, 00:09:21.809 "nvme_io_md": false, 00:09:21.809 "write_zeroes": true, 00:09:21.809 "zcopy": true, 00:09:21.809 "get_zone_info": false, 00:09:21.809 "zone_management": false, 00:09:21.809 "zone_append": false, 00:09:21.810 "compare": false, 00:09:21.810 "compare_and_write": false, 00:09:21.810 "abort": true, 00:09:21.810 "seek_hole": false, 00:09:21.810 "seek_data": false, 00:09:21.810 "copy": true, 00:09:21.810 "nvme_iov_md": false 00:09:21.810 }, 00:09:21.810 "memory_domains": [ 00:09:21.810 { 00:09:21.810 "dma_device_id": "system", 00:09:21.810 "dma_device_type": 1 00:09:21.810 }, 00:09:21.810 { 00:09:21.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.810 "dma_device_type": 2 00:09:21.810 } 00:09:22.067 ], 00:09:22.067 "driver_specific": {} 00:09:22.067 } 00:09:22.067 ] 00:09:22.067 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.067 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:22.067 10:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:22.067 10:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:22.067 10:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 
BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:22.067 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.067 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.067 [2024-11-15 10:37:42.975940] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:22.067 [2024-11-15 10:37:42.976134] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:22.067 [2024-11-15 10:37:42.976301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:22.067 [2024-11-15 10:37:42.978743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:22.067 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.067 10:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:22.067 10:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.067 10:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.067 10:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:22.067 10:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.067 10:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.068 10:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.068 10:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.068 10:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.068 10:37:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.068 10:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.068 10:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.068 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.068 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.068 10:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.068 10:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.068 "name": "Existed_Raid", 00:09:22.068 "uuid": "d27bc7c6-09ac-454c-8176-db6596987e38", 00:09:22.068 "strip_size_kb": 64, 00:09:22.068 "state": "configuring", 00:09:22.068 "raid_level": "concat", 00:09:22.068 "superblock": true, 00:09:22.068 "num_base_bdevs": 3, 00:09:22.068 "num_base_bdevs_discovered": 2, 00:09:22.068 "num_base_bdevs_operational": 3, 00:09:22.068 "base_bdevs_list": [ 00:09:22.068 { 00:09:22.068 "name": "BaseBdev1", 00:09:22.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.068 "is_configured": false, 00:09:22.068 "data_offset": 0, 00:09:22.068 "data_size": 0 00:09:22.068 }, 00:09:22.068 { 00:09:22.068 "name": "BaseBdev2", 00:09:22.068 "uuid": "2403ae00-a546-4838-934b-ff60e0709d99", 00:09:22.068 "is_configured": true, 00:09:22.068 "data_offset": 2048, 00:09:22.068 "data_size": 63488 00:09:22.068 }, 00:09:22.068 { 00:09:22.068 "name": "BaseBdev3", 00:09:22.068 "uuid": "3774f728-285b-4c46-aa6a-70678c79afe7", 00:09:22.068 "is_configured": true, 00:09:22.068 "data_offset": 2048, 00:09:22.068 "data_size": 63488 00:09:22.068 } 00:09:22.068 ] 00:09:22.068 }' 00:09:22.068 10:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.068 
10:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.633 10:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:22.633 10:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.634 10:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.634 [2024-11-15 10:37:43.488098] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:22.634 10:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.634 10:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:22.634 10:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.634 10:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.634 10:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:22.634 10:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.634 10:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.634 10:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.634 10:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.634 10:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.634 10:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.634 10:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.634 10:37:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.634 10:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.634 10:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.634 10:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.634 10:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.634 "name": "Existed_Raid", 00:09:22.634 "uuid": "d27bc7c6-09ac-454c-8176-db6596987e38", 00:09:22.634 "strip_size_kb": 64, 00:09:22.634 "state": "configuring", 00:09:22.634 "raid_level": "concat", 00:09:22.634 "superblock": true, 00:09:22.634 "num_base_bdevs": 3, 00:09:22.634 "num_base_bdevs_discovered": 1, 00:09:22.634 "num_base_bdevs_operational": 3, 00:09:22.634 "base_bdevs_list": [ 00:09:22.634 { 00:09:22.634 "name": "BaseBdev1", 00:09:22.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.634 "is_configured": false, 00:09:22.634 "data_offset": 0, 00:09:22.634 "data_size": 0 00:09:22.634 }, 00:09:22.634 { 00:09:22.634 "name": null, 00:09:22.634 "uuid": "2403ae00-a546-4838-934b-ff60e0709d99", 00:09:22.634 "is_configured": false, 00:09:22.634 "data_offset": 0, 00:09:22.634 "data_size": 63488 00:09:22.634 }, 00:09:22.634 { 00:09:22.634 "name": "BaseBdev3", 00:09:22.634 "uuid": "3774f728-285b-4c46-aa6a-70678c79afe7", 00:09:22.634 "is_configured": true, 00:09:22.634 "data_offset": 2048, 00:09:22.634 "data_size": 63488 00:09:22.634 } 00:09:22.634 ] 00:09:22.634 }' 00:09:22.634 10:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.634 10:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.891 10:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 
00:09:22.891 10:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.891 10:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.891 10:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.891 10:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.149 10:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:23.149 10:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:23.149 10:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.149 10:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.149 [2024-11-15 10:37:44.103052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:23.149 BaseBdev1 00:09:23.149 10:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.149 10:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:23.149 10:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:23.149 10:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:23.149 10:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:23.149 10:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:23.149 10:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:23.149 10:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 
00:09:23.149 10:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.149 10:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.149 10:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.149 10:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:23.149 10:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.149 10:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.149 [ 00:09:23.149 { 00:09:23.149 "name": "BaseBdev1", 00:09:23.150 "aliases": [ 00:09:23.150 "ef6626e6-6eb6-4aee-acaf-db13810fce79" 00:09:23.150 ], 00:09:23.150 "product_name": "Malloc disk", 00:09:23.150 "block_size": 512, 00:09:23.150 "num_blocks": 65536, 00:09:23.150 "uuid": "ef6626e6-6eb6-4aee-acaf-db13810fce79", 00:09:23.150 "assigned_rate_limits": { 00:09:23.150 "rw_ios_per_sec": 0, 00:09:23.150 "rw_mbytes_per_sec": 0, 00:09:23.150 "r_mbytes_per_sec": 0, 00:09:23.150 "w_mbytes_per_sec": 0 00:09:23.150 }, 00:09:23.150 "claimed": true, 00:09:23.150 "claim_type": "exclusive_write", 00:09:23.150 "zoned": false, 00:09:23.150 "supported_io_types": { 00:09:23.150 "read": true, 00:09:23.150 "write": true, 00:09:23.150 "unmap": true, 00:09:23.150 "flush": true, 00:09:23.150 "reset": true, 00:09:23.150 "nvme_admin": false, 00:09:23.150 "nvme_io": false, 00:09:23.150 "nvme_io_md": false, 00:09:23.150 "write_zeroes": true, 00:09:23.150 "zcopy": true, 00:09:23.150 "get_zone_info": false, 00:09:23.150 "zone_management": false, 00:09:23.150 "zone_append": false, 00:09:23.150 "compare": false, 00:09:23.150 "compare_and_write": false, 00:09:23.150 "abort": true, 00:09:23.150 "seek_hole": false, 00:09:23.150 "seek_data": false, 00:09:23.150 "copy": true, 00:09:23.150 "nvme_iov_md": false 
00:09:23.150 }, 00:09:23.150 "memory_domains": [ 00:09:23.150 { 00:09:23.150 "dma_device_id": "system", 00:09:23.150 "dma_device_type": 1 00:09:23.150 }, 00:09:23.150 { 00:09:23.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.150 "dma_device_type": 2 00:09:23.150 } 00:09:23.150 ], 00:09:23.150 "driver_specific": {} 00:09:23.150 } 00:09:23.150 ] 00:09:23.150 10:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.150 10:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:23.150 10:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:23.150 10:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.150 10:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.150 10:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:23.150 10:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.150 10:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.150 10:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.150 10:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.150 10:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.150 10:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.150 10:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.150 10:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:23.150 10:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.150 10:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.150 10:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.150 10:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.150 "name": "Existed_Raid", 00:09:23.150 "uuid": "d27bc7c6-09ac-454c-8176-db6596987e38", 00:09:23.150 "strip_size_kb": 64, 00:09:23.150 "state": "configuring", 00:09:23.150 "raid_level": "concat", 00:09:23.150 "superblock": true, 00:09:23.150 "num_base_bdevs": 3, 00:09:23.150 "num_base_bdevs_discovered": 2, 00:09:23.150 "num_base_bdevs_operational": 3, 00:09:23.150 "base_bdevs_list": [ 00:09:23.150 { 00:09:23.150 "name": "BaseBdev1", 00:09:23.150 "uuid": "ef6626e6-6eb6-4aee-acaf-db13810fce79", 00:09:23.150 "is_configured": true, 00:09:23.150 "data_offset": 2048, 00:09:23.150 "data_size": 63488 00:09:23.150 }, 00:09:23.150 { 00:09:23.150 "name": null, 00:09:23.150 "uuid": "2403ae00-a546-4838-934b-ff60e0709d99", 00:09:23.150 "is_configured": false, 00:09:23.150 "data_offset": 0, 00:09:23.150 "data_size": 63488 00:09:23.150 }, 00:09:23.150 { 00:09:23.150 "name": "BaseBdev3", 00:09:23.150 "uuid": "3774f728-285b-4c46-aa6a-70678c79afe7", 00:09:23.150 "is_configured": true, 00:09:23.150 "data_offset": 2048, 00:09:23.150 "data_size": 63488 00:09:23.150 } 00:09:23.150 ] 00:09:23.150 }' 00:09:23.150 10:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.150 10:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.714 10:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.714 10:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:23.714 10:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.714 10:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:23.714 10:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.714 10:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:23.714 10:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:23.714 10:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.714 10:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.714 [2024-11-15 10:37:44.711283] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:23.714 10:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.714 10:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:23.714 10:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.714 10:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.714 10:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:23.714 10:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.714 10:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.714 10:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.714 10:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:09:23.714 10:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.714 10:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.714 10:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.714 10:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.715 10:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.715 10:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.715 10:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.715 10:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.715 "name": "Existed_Raid", 00:09:23.715 "uuid": "d27bc7c6-09ac-454c-8176-db6596987e38", 00:09:23.715 "strip_size_kb": 64, 00:09:23.715 "state": "configuring", 00:09:23.715 "raid_level": "concat", 00:09:23.715 "superblock": true, 00:09:23.715 "num_base_bdevs": 3, 00:09:23.715 "num_base_bdevs_discovered": 1, 00:09:23.715 "num_base_bdevs_operational": 3, 00:09:23.715 "base_bdevs_list": [ 00:09:23.715 { 00:09:23.715 "name": "BaseBdev1", 00:09:23.715 "uuid": "ef6626e6-6eb6-4aee-acaf-db13810fce79", 00:09:23.715 "is_configured": true, 00:09:23.715 "data_offset": 2048, 00:09:23.715 "data_size": 63488 00:09:23.715 }, 00:09:23.715 { 00:09:23.715 "name": null, 00:09:23.715 "uuid": "2403ae00-a546-4838-934b-ff60e0709d99", 00:09:23.715 "is_configured": false, 00:09:23.715 "data_offset": 0, 00:09:23.715 "data_size": 63488 00:09:23.715 }, 00:09:23.715 { 00:09:23.715 "name": null, 00:09:23.715 "uuid": "3774f728-285b-4c46-aa6a-70678c79afe7", 00:09:23.715 "is_configured": false, 00:09:23.715 "data_offset": 0, 00:09:23.715 "data_size": 63488 00:09:23.715 } 
00:09:23.715 ] 00:09:23.715 }' 00:09:23.715 10:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.715 10:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.280 10:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.280 10:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.280 10:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.280 10:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:24.280 10:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.280 10:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:24.280 10:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:24.280 10:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.280 10:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.280 [2024-11-15 10:37:45.291547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:24.280 10:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.280 10:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:24.280 10:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.280 10:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.280 10:37:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:24.280 10:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.280 10:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.280 10:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.280 10:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.280 10:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.280 10:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.280 10:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.280 10:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.280 10:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.280 10:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.280 10:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.280 10:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.280 "name": "Existed_Raid", 00:09:24.280 "uuid": "d27bc7c6-09ac-454c-8176-db6596987e38", 00:09:24.280 "strip_size_kb": 64, 00:09:24.280 "state": "configuring", 00:09:24.280 "raid_level": "concat", 00:09:24.280 "superblock": true, 00:09:24.280 "num_base_bdevs": 3, 00:09:24.280 "num_base_bdevs_discovered": 2, 00:09:24.280 "num_base_bdevs_operational": 3, 00:09:24.280 "base_bdevs_list": [ 00:09:24.280 { 00:09:24.280 "name": "BaseBdev1", 00:09:24.280 "uuid": "ef6626e6-6eb6-4aee-acaf-db13810fce79", 00:09:24.280 "is_configured": true, 00:09:24.280 "data_offset": 
2048, 00:09:24.280 "data_size": 63488 00:09:24.281 }, 00:09:24.281 { 00:09:24.281 "name": null, 00:09:24.281 "uuid": "2403ae00-a546-4838-934b-ff60e0709d99", 00:09:24.281 "is_configured": false, 00:09:24.281 "data_offset": 0, 00:09:24.281 "data_size": 63488 00:09:24.281 }, 00:09:24.281 { 00:09:24.281 "name": "BaseBdev3", 00:09:24.281 "uuid": "3774f728-285b-4c46-aa6a-70678c79afe7", 00:09:24.281 "is_configured": true, 00:09:24.281 "data_offset": 2048, 00:09:24.281 "data_size": 63488 00:09:24.281 } 00:09:24.281 ] 00:09:24.281 }' 00:09:24.281 10:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.281 10:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.846 10:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.846 10:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.846 10:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:24.846 10:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.846 10:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.846 10:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:24.846 10:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:24.846 10:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.846 10:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.846 [2024-11-15 10:37:45.843683] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:24.846 10:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:24.846 10:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:24.846 10:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.846 10:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.846 10:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:24.846 10:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.846 10:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.846 10:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.847 10:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.847 10:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.847 10:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.847 10:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.847 10:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.847 10:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.847 10:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.847 10:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.847 10:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.847 "name": "Existed_Raid", 00:09:24.847 "uuid": "d27bc7c6-09ac-454c-8176-db6596987e38", 00:09:24.847 
"strip_size_kb": 64, 00:09:24.847 "state": "configuring", 00:09:24.847 "raid_level": "concat", 00:09:24.847 "superblock": true, 00:09:24.847 "num_base_bdevs": 3, 00:09:24.847 "num_base_bdevs_discovered": 1, 00:09:24.847 "num_base_bdevs_operational": 3, 00:09:24.847 "base_bdevs_list": [ 00:09:24.847 { 00:09:24.847 "name": null, 00:09:24.847 "uuid": "ef6626e6-6eb6-4aee-acaf-db13810fce79", 00:09:24.847 "is_configured": false, 00:09:24.847 "data_offset": 0, 00:09:24.847 "data_size": 63488 00:09:24.847 }, 00:09:24.847 { 00:09:24.847 "name": null, 00:09:24.847 "uuid": "2403ae00-a546-4838-934b-ff60e0709d99", 00:09:24.847 "is_configured": false, 00:09:24.847 "data_offset": 0, 00:09:24.847 "data_size": 63488 00:09:24.847 }, 00:09:24.847 { 00:09:24.847 "name": "BaseBdev3", 00:09:24.847 "uuid": "3774f728-285b-4c46-aa6a-70678c79afe7", 00:09:24.847 "is_configured": true, 00:09:24.847 "data_offset": 2048, 00:09:24.847 "data_size": 63488 00:09:24.847 } 00:09:24.847 ] 00:09:24.847 }' 00:09:24.847 10:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.847 10:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.412 10:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.412 10:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.412 10:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.412 10:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:25.412 10:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.412 10:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:25.412 10:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd 
bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:25.412 10:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.412 10:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.412 [2024-11-15 10:37:46.478607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:25.412 10:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.412 10:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:25.412 10:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.412 10:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.412 10:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:25.412 10:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.412 10:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.412 10:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.412 10:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.412 10:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.412 10:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.412 10:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.412 10:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.412 10:37:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:25.412 10:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.412 10:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.412 10:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.412 "name": "Existed_Raid", 00:09:25.413 "uuid": "d27bc7c6-09ac-454c-8176-db6596987e38", 00:09:25.413 "strip_size_kb": 64, 00:09:25.413 "state": "configuring", 00:09:25.413 "raid_level": "concat", 00:09:25.413 "superblock": true, 00:09:25.413 "num_base_bdevs": 3, 00:09:25.413 "num_base_bdevs_discovered": 2, 00:09:25.413 "num_base_bdevs_operational": 3, 00:09:25.413 "base_bdevs_list": [ 00:09:25.413 { 00:09:25.413 "name": null, 00:09:25.413 "uuid": "ef6626e6-6eb6-4aee-acaf-db13810fce79", 00:09:25.413 "is_configured": false, 00:09:25.413 "data_offset": 0, 00:09:25.413 "data_size": 63488 00:09:25.413 }, 00:09:25.413 { 00:09:25.413 "name": "BaseBdev2", 00:09:25.413 "uuid": "2403ae00-a546-4838-934b-ff60e0709d99", 00:09:25.413 "is_configured": true, 00:09:25.413 "data_offset": 2048, 00:09:25.413 "data_size": 63488 00:09:25.413 }, 00:09:25.413 { 00:09:25.413 "name": "BaseBdev3", 00:09:25.413 "uuid": "3774f728-285b-4c46-aa6a-70678c79afe7", 00:09:25.413 "is_configured": true, 00:09:25.413 "data_offset": 2048, 00:09:25.413 "data_size": 63488 00:09:25.413 } 00:09:25.413 ] 00:09:25.413 }' 00:09:25.413 10:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.413 10:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.979 10:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.979 10:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.979 10:37:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.979 10:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:25.979 10:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.979 10:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:25.979 10:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.979 10:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:25.979 10:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.979 10:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.979 10:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.979 10:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ef6626e6-6eb6-4aee-acaf-db13810fce79 00:09:25.979 10:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.979 10:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.979 [2024-11-15 10:37:47.118095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:25.979 [2024-11-15 10:37:47.118579] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:25.979 [2024-11-15 10:37:47.118613] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:25.979 NewBaseBdev 00:09:25.979 [2024-11-15 10:37:47.118938] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:25.979 [2024-11-15 10:37:47.119126] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:25.979 [2024-11-15 10:37:47.119151] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:25.979 [2024-11-15 10:37:47.119322] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:25.979 10:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.979 10:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:25.979 10:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:25.979 10:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:25.979 10:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:25.979 10:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:25.979 10:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:25.979 10:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:25.979 10:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.979 10:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.979 10:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.979 10:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:25.979 10:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.979 10:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.238 [ 
00:09:26.238 { 00:09:26.238 "name": "NewBaseBdev", 00:09:26.238 "aliases": [ 00:09:26.238 "ef6626e6-6eb6-4aee-acaf-db13810fce79" 00:09:26.238 ], 00:09:26.238 "product_name": "Malloc disk", 00:09:26.238 "block_size": 512, 00:09:26.238 "num_blocks": 65536, 00:09:26.238 "uuid": "ef6626e6-6eb6-4aee-acaf-db13810fce79", 00:09:26.238 "assigned_rate_limits": { 00:09:26.238 "rw_ios_per_sec": 0, 00:09:26.238 "rw_mbytes_per_sec": 0, 00:09:26.238 "r_mbytes_per_sec": 0, 00:09:26.238 "w_mbytes_per_sec": 0 00:09:26.238 }, 00:09:26.238 "claimed": true, 00:09:26.238 "claim_type": "exclusive_write", 00:09:26.238 "zoned": false, 00:09:26.238 "supported_io_types": { 00:09:26.238 "read": true, 00:09:26.238 "write": true, 00:09:26.238 "unmap": true, 00:09:26.238 "flush": true, 00:09:26.238 "reset": true, 00:09:26.238 "nvme_admin": false, 00:09:26.239 "nvme_io": false, 00:09:26.239 "nvme_io_md": false, 00:09:26.239 "write_zeroes": true, 00:09:26.239 "zcopy": true, 00:09:26.239 "get_zone_info": false, 00:09:26.239 "zone_management": false, 00:09:26.239 "zone_append": false, 00:09:26.239 "compare": false, 00:09:26.239 "compare_and_write": false, 00:09:26.239 "abort": true, 00:09:26.239 "seek_hole": false, 00:09:26.239 "seek_data": false, 00:09:26.239 "copy": true, 00:09:26.239 "nvme_iov_md": false 00:09:26.239 }, 00:09:26.239 "memory_domains": [ 00:09:26.239 { 00:09:26.239 "dma_device_id": "system", 00:09:26.239 "dma_device_type": 1 00:09:26.239 }, 00:09:26.239 { 00:09:26.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.239 "dma_device_type": 2 00:09:26.239 } 00:09:26.239 ], 00:09:26.239 "driver_specific": {} 00:09:26.239 } 00:09:26.239 ] 00:09:26.239 10:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.239 10:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:26.239 10:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online 
concat 64 3 00:09:26.239 10:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.239 10:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:26.239 10:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:26.239 10:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.239 10:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.239 10:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.239 10:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.239 10:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.239 10:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.239 10:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.239 10:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.239 10:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.239 10:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.239 10:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.239 10:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.239 "name": "Existed_Raid", 00:09:26.239 "uuid": "d27bc7c6-09ac-454c-8176-db6596987e38", 00:09:26.239 "strip_size_kb": 64, 00:09:26.239 "state": "online", 00:09:26.239 "raid_level": "concat", 00:09:26.239 "superblock": true, 00:09:26.239 
"num_base_bdevs": 3, 00:09:26.239 "num_base_bdevs_discovered": 3, 00:09:26.239 "num_base_bdevs_operational": 3, 00:09:26.239 "base_bdevs_list": [ 00:09:26.239 { 00:09:26.239 "name": "NewBaseBdev", 00:09:26.239 "uuid": "ef6626e6-6eb6-4aee-acaf-db13810fce79", 00:09:26.239 "is_configured": true, 00:09:26.239 "data_offset": 2048, 00:09:26.239 "data_size": 63488 00:09:26.239 }, 00:09:26.239 { 00:09:26.239 "name": "BaseBdev2", 00:09:26.239 "uuid": "2403ae00-a546-4838-934b-ff60e0709d99", 00:09:26.239 "is_configured": true, 00:09:26.239 "data_offset": 2048, 00:09:26.239 "data_size": 63488 00:09:26.239 }, 00:09:26.239 { 00:09:26.239 "name": "BaseBdev3", 00:09:26.239 "uuid": "3774f728-285b-4c46-aa6a-70678c79afe7", 00:09:26.239 "is_configured": true, 00:09:26.239 "data_offset": 2048, 00:09:26.239 "data_size": 63488 00:09:26.239 } 00:09:26.239 ] 00:09:26.239 }' 00:09:26.239 10:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.239 10:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.815 10:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:26.815 10:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:26.815 10:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:26.815 10:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:26.815 10:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:26.815 10:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:26.815 10:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:26.815 10:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:09:26.815 10:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.815 10:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.815 [2024-11-15 10:37:47.686708] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:26.815 10:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.815 10:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:26.815 "name": "Existed_Raid", 00:09:26.815 "aliases": [ 00:09:26.815 "d27bc7c6-09ac-454c-8176-db6596987e38" 00:09:26.815 ], 00:09:26.815 "product_name": "Raid Volume", 00:09:26.815 "block_size": 512, 00:09:26.815 "num_blocks": 190464, 00:09:26.815 "uuid": "d27bc7c6-09ac-454c-8176-db6596987e38", 00:09:26.815 "assigned_rate_limits": { 00:09:26.815 "rw_ios_per_sec": 0, 00:09:26.815 "rw_mbytes_per_sec": 0, 00:09:26.815 "r_mbytes_per_sec": 0, 00:09:26.815 "w_mbytes_per_sec": 0 00:09:26.815 }, 00:09:26.815 "claimed": false, 00:09:26.815 "zoned": false, 00:09:26.815 "supported_io_types": { 00:09:26.815 "read": true, 00:09:26.815 "write": true, 00:09:26.815 "unmap": true, 00:09:26.815 "flush": true, 00:09:26.815 "reset": true, 00:09:26.815 "nvme_admin": false, 00:09:26.815 "nvme_io": false, 00:09:26.815 "nvme_io_md": false, 00:09:26.815 "write_zeroes": true, 00:09:26.815 "zcopy": false, 00:09:26.815 "get_zone_info": false, 00:09:26.815 "zone_management": false, 00:09:26.815 "zone_append": false, 00:09:26.815 "compare": false, 00:09:26.815 "compare_and_write": false, 00:09:26.815 "abort": false, 00:09:26.815 "seek_hole": false, 00:09:26.815 "seek_data": false, 00:09:26.815 "copy": false, 00:09:26.815 "nvme_iov_md": false 00:09:26.815 }, 00:09:26.815 "memory_domains": [ 00:09:26.815 { 00:09:26.815 "dma_device_id": "system", 00:09:26.815 "dma_device_type": 1 00:09:26.815 }, 00:09:26.815 { 00:09:26.815 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.815 "dma_device_type": 2 00:09:26.815 }, 00:09:26.815 { 00:09:26.815 "dma_device_id": "system", 00:09:26.815 "dma_device_type": 1 00:09:26.815 }, 00:09:26.815 { 00:09:26.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.815 "dma_device_type": 2 00:09:26.815 }, 00:09:26.815 { 00:09:26.815 "dma_device_id": "system", 00:09:26.815 "dma_device_type": 1 00:09:26.815 }, 00:09:26.815 { 00:09:26.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.815 "dma_device_type": 2 00:09:26.815 } 00:09:26.815 ], 00:09:26.815 "driver_specific": { 00:09:26.815 "raid": { 00:09:26.816 "uuid": "d27bc7c6-09ac-454c-8176-db6596987e38", 00:09:26.816 "strip_size_kb": 64, 00:09:26.816 "state": "online", 00:09:26.816 "raid_level": "concat", 00:09:26.816 "superblock": true, 00:09:26.816 "num_base_bdevs": 3, 00:09:26.816 "num_base_bdevs_discovered": 3, 00:09:26.816 "num_base_bdevs_operational": 3, 00:09:26.816 "base_bdevs_list": [ 00:09:26.816 { 00:09:26.816 "name": "NewBaseBdev", 00:09:26.816 "uuid": "ef6626e6-6eb6-4aee-acaf-db13810fce79", 00:09:26.816 "is_configured": true, 00:09:26.816 "data_offset": 2048, 00:09:26.816 "data_size": 63488 00:09:26.816 }, 00:09:26.816 { 00:09:26.816 "name": "BaseBdev2", 00:09:26.816 "uuid": "2403ae00-a546-4838-934b-ff60e0709d99", 00:09:26.816 "is_configured": true, 00:09:26.816 "data_offset": 2048, 00:09:26.816 "data_size": 63488 00:09:26.816 }, 00:09:26.816 { 00:09:26.816 "name": "BaseBdev3", 00:09:26.816 "uuid": "3774f728-285b-4c46-aa6a-70678c79afe7", 00:09:26.816 "is_configured": true, 00:09:26.816 "data_offset": 2048, 00:09:26.816 "data_size": 63488 00:09:26.816 } 00:09:26.816 ] 00:09:26.816 } 00:09:26.816 } 00:09:26.816 }' 00:09:26.816 10:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:26.816 10:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # 
base_bdev_names='NewBaseBdev 00:09:26.816 BaseBdev2 00:09:26.816 BaseBdev3' 00:09:26.816 10:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.816 10:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:26.816 10:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:26.816 10:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:26.816 10:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.816 10:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.816 10:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.816 10:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.816 10:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:26.816 10:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:26.816 10:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:26.816 10:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:26.816 10:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.816 10:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.816 10:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.816 10:37:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.075 10:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.075 10:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.075 10:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.075 10:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:27.075 10:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.075 10:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.075 10:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.075 10:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.075 10:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.075 10:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.075 10:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:27.075 10:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.075 10:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.075 [2024-11-15 10:37:48.046412] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:27.075 [2024-11-15 10:37:48.046445] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:27.075 [2024-11-15 10:37:48.046580] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:09:27.075 [2024-11-15 10:37:48.046658] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:27.075 [2024-11-15 10:37:48.046679] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:27.075 10:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.075 10:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66247 00:09:27.075 10:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66247 ']' 00:09:27.075 10:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66247 00:09:27.075 10:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:27.075 10:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:27.075 10:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66247 00:09:27.075 killing process with pid 66247 00:09:27.075 10:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:27.075 10:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:27.075 10:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66247' 00:09:27.075 10:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66247 00:09:27.075 [2024-11-15 10:37:48.084005] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:27.075 10:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66247 00:09:27.333 [2024-11-15 10:37:48.352159] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:28.268 10:37:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:28.268 00:09:28.268 real 0m11.711s 00:09:28.268 user 0m19.501s 00:09:28.268 sys 0m1.554s 00:09:28.268 ************************************ 00:09:28.268 END TEST raid_state_function_test_sb 00:09:28.268 ************************************ 00:09:28.268 10:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.268 10:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.526 10:37:49 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:28.526 10:37:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:28.526 10:37:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.526 10:37:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:28.526 ************************************ 00:09:28.526 START TEST raid_superblock_test 00:09:28.526 ************************************ 00:09:28.526 10:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:09:28.526 10:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:28.526 10:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:28.526 10:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:28.526 10:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:28.526 10:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:28.526 10:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:28.526 10:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:28.526 10:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local 
base_bdevs_pt_uuid 00:09:28.526 10:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:28.526 10:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:28.526 10:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:28.526 10:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:28.526 10:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:28.526 10:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:28.526 10:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:28.526 10:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:28.526 10:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66880 00:09:28.526 10:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66880 00:09:28.526 10:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:28.527 10:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66880 ']' 00:09:28.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.527 10:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.527 10:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:28.527 10:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:28.527 10:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:28.527 10:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.527 [2024-11-15 10:37:49.535274] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:09:28.527 [2024-11-15 10:37:49.535448] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66880 ] 00:09:28.785 [2024-11-15 10:37:49.709193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.785 [2024-11-15 10:37:49.840948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.042 [2024-11-15 10:37:50.045945] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:29.042 [2024-11-15 10:37:50.046023] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:29.609 10:37:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.609 malloc1 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.609 [2024-11-15 10:37:50.599336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:29.609 [2024-11-15 10:37:50.599432] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:29.609 [2024-11-15 10:37:50.599469] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:29.609 [2024-11-15 10:37:50.599486] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:29.609 [2024-11-15 10:37:50.602387] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:29.609 [2024-11-15 10:37:50.602434] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:29.609 pt1 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:29.609 10:37:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.609 malloc2 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.609 [2024-11-15 10:37:50.655490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:29.609 [2024-11-15 10:37:50.655722] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:29.609 [2024-11-15 10:37:50.655801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:29.609 
[2024-11-15 10:37:50.655910] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:29.609 [2024-11-15 10:37:50.658744] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:29.609 [2024-11-15 10:37:50.658903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:29.609 pt2 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.609 malloc3 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:29.609 
10:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.609 [2024-11-15 10:37:50.724207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:29.609 [2024-11-15 10:37:50.724274] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:29.609 [2024-11-15 10:37:50.724309] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:29.609 [2024-11-15 10:37:50.724325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:29.609 [2024-11-15 10:37:50.727083] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:29.609 [2024-11-15 10:37:50.727131] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:29.609 pt3 00:09:29.609 10:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.610 10:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:29.610 10:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:29.610 10:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:29.610 10:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.610 10:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.610 [2024-11-15 10:37:50.736265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:29.610 [2024-11-15 10:37:50.738841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:29.610 [2024-11-15 10:37:50.738949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:29.610 [2024-11-15 
10:37:50.739157] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:29.610 [2024-11-15 10:37:50.739180] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:29.610 [2024-11-15 10:37:50.739481] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:29.610 [2024-11-15 10:37:50.739909] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:29.610 [2024-11-15 10:37:50.739999] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:29.610 [2024-11-15 10:37:50.740377] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:29.610 10:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.610 10:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:29.610 10:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:29.610 10:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:29.610 10:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:29.610 10:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.610 10:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.610 10:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.610 10:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.610 10:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.610 10:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.610 10:37:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.610 10:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:29.610 10:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.610 10:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.610 10:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.868 10:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.868 "name": "raid_bdev1", 00:09:29.868 "uuid": "dc33675a-4ab9-4698-94c1-4a839e4ee3bc", 00:09:29.868 "strip_size_kb": 64, 00:09:29.868 "state": "online", 00:09:29.868 "raid_level": "concat", 00:09:29.868 "superblock": true, 00:09:29.868 "num_base_bdevs": 3, 00:09:29.868 "num_base_bdevs_discovered": 3, 00:09:29.868 "num_base_bdevs_operational": 3, 00:09:29.868 "base_bdevs_list": [ 00:09:29.868 { 00:09:29.868 "name": "pt1", 00:09:29.868 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:29.868 "is_configured": true, 00:09:29.868 "data_offset": 2048, 00:09:29.868 "data_size": 63488 00:09:29.868 }, 00:09:29.868 { 00:09:29.868 "name": "pt2", 00:09:29.868 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:29.868 "is_configured": true, 00:09:29.868 "data_offset": 2048, 00:09:29.868 "data_size": 63488 00:09:29.868 }, 00:09:29.869 { 00:09:29.869 "name": "pt3", 00:09:29.869 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:29.869 "is_configured": true, 00:09:29.869 "data_offset": 2048, 00:09:29.869 "data_size": 63488 00:09:29.869 } 00:09:29.869 ] 00:09:29.869 }' 00:09:29.869 10:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.869 10:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.127 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # 
verify_raid_bdev_properties raid_bdev1 00:09:30.127 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:30.127 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:30.127 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:30.127 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:30.127 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:30.127 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:30.127 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:30.127 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.127 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.127 [2024-11-15 10:37:51.248905] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:30.127 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.386 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:30.386 "name": "raid_bdev1", 00:09:30.386 "aliases": [ 00:09:30.386 "dc33675a-4ab9-4698-94c1-4a839e4ee3bc" 00:09:30.386 ], 00:09:30.386 "product_name": "Raid Volume", 00:09:30.386 "block_size": 512, 00:09:30.386 "num_blocks": 190464, 00:09:30.386 "uuid": "dc33675a-4ab9-4698-94c1-4a839e4ee3bc", 00:09:30.386 "assigned_rate_limits": { 00:09:30.386 "rw_ios_per_sec": 0, 00:09:30.386 "rw_mbytes_per_sec": 0, 00:09:30.386 "r_mbytes_per_sec": 0, 00:09:30.386 "w_mbytes_per_sec": 0 00:09:30.386 }, 00:09:30.386 "claimed": false, 00:09:30.386 "zoned": false, 00:09:30.386 "supported_io_types": { 00:09:30.386 "read": true, 00:09:30.386 "write": true, 00:09:30.386 "unmap": true, 
00:09:30.386 "flush": true, 00:09:30.386 "reset": true, 00:09:30.386 "nvme_admin": false, 00:09:30.386 "nvme_io": false, 00:09:30.386 "nvme_io_md": false, 00:09:30.386 "write_zeroes": true, 00:09:30.386 "zcopy": false, 00:09:30.386 "get_zone_info": false, 00:09:30.386 "zone_management": false, 00:09:30.386 "zone_append": false, 00:09:30.386 "compare": false, 00:09:30.386 "compare_and_write": false, 00:09:30.386 "abort": false, 00:09:30.386 "seek_hole": false, 00:09:30.386 "seek_data": false, 00:09:30.386 "copy": false, 00:09:30.386 "nvme_iov_md": false 00:09:30.386 }, 00:09:30.386 "memory_domains": [ 00:09:30.386 { 00:09:30.386 "dma_device_id": "system", 00:09:30.386 "dma_device_type": 1 00:09:30.386 }, 00:09:30.386 { 00:09:30.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.386 "dma_device_type": 2 00:09:30.386 }, 00:09:30.386 { 00:09:30.386 "dma_device_id": "system", 00:09:30.386 "dma_device_type": 1 00:09:30.386 }, 00:09:30.386 { 00:09:30.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.386 "dma_device_type": 2 00:09:30.386 }, 00:09:30.386 { 00:09:30.386 "dma_device_id": "system", 00:09:30.386 "dma_device_type": 1 00:09:30.386 }, 00:09:30.386 { 00:09:30.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.386 "dma_device_type": 2 00:09:30.386 } 00:09:30.386 ], 00:09:30.386 "driver_specific": { 00:09:30.386 "raid": { 00:09:30.386 "uuid": "dc33675a-4ab9-4698-94c1-4a839e4ee3bc", 00:09:30.386 "strip_size_kb": 64, 00:09:30.386 "state": "online", 00:09:30.386 "raid_level": "concat", 00:09:30.386 "superblock": true, 00:09:30.386 "num_base_bdevs": 3, 00:09:30.386 "num_base_bdevs_discovered": 3, 00:09:30.386 "num_base_bdevs_operational": 3, 00:09:30.386 "base_bdevs_list": [ 00:09:30.386 { 00:09:30.386 "name": "pt1", 00:09:30.387 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:30.387 "is_configured": true, 00:09:30.387 "data_offset": 2048, 00:09:30.387 "data_size": 63488 00:09:30.387 }, 00:09:30.387 { 00:09:30.387 "name": "pt2", 00:09:30.387 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:09:30.387 "is_configured": true, 00:09:30.387 "data_offset": 2048, 00:09:30.387 "data_size": 63488 00:09:30.387 }, 00:09:30.387 { 00:09:30.387 "name": "pt3", 00:09:30.387 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:30.387 "is_configured": true, 00:09:30.387 "data_offset": 2048, 00:09:30.387 "data_size": 63488 00:09:30.387 } 00:09:30.387 ] 00:09:30.387 } 00:09:30.387 } 00:09:30.387 }' 00:09:30.387 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:30.387 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:30.387 pt2 00:09:30.387 pt3' 00:09:30.387 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.387 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:30.387 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.387 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:30.387 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.387 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.387 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.387 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.387 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.387 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.387 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for 
name in $base_bdev_names 00:09:30.387 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:30.387 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.387 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.387 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.387 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.387 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.387 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.387 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.387 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:30.387 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.387 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.387 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.387 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.387 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.387 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.387 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:30.387 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.387 10:37:51 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:30.387 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:30.646 [2024-11-15 10:37:51.548892] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:30.646 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.646 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=dc33675a-4ab9-4698-94c1-4a839e4ee3bc 00:09:30.646 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z dc33675a-4ab9-4698-94c1-4a839e4ee3bc ']' 00:09:30.646 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:30.646 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.646 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.646 [2024-11-15 10:37:51.588535] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:30.646 [2024-11-15 10:37:51.588569] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:30.646 [2024-11-15 10:37:51.588671] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:30.646 [2024-11-15 10:37:51.588764] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:30.646 [2024-11-15 10:37:51.588779] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:30.646 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.646 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.646 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:30.646 10:37:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.646 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.646 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.646 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:30.646 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:30.646 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:30.646 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:30.646 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.646 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.646 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.646 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:30.646 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:30.646 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.646 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.646 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.646 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:30.646 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:30.646 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.646 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.646 
10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.646 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:30.646 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:30.646 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.646 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.646 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.647 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:30.647 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:30.647 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:30.647 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:30.647 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:30.647 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:30.647 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:30.647 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:30.647 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:30.647 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.647 10:37:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.647 [2024-11-15 10:37:51.716641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:30.647 [2024-11-15 10:37:51.719102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:30.647 [2024-11-15 10:37:51.719171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:30.647 [2024-11-15 10:37:51.719244] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:30.647 [2024-11-15 10:37:51.719319] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:30.647 [2024-11-15 10:37:51.719354] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:30.647 [2024-11-15 10:37:51.719381] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:30.647 [2024-11-15 10:37:51.719395] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:30.647 request: 00:09:30.647 { 00:09:30.647 "name": "raid_bdev1", 00:09:30.647 "raid_level": "concat", 00:09:30.647 "base_bdevs": [ 00:09:30.647 "malloc1", 00:09:30.647 "malloc2", 00:09:30.647 "malloc3" 00:09:30.647 ], 00:09:30.647 "strip_size_kb": 64, 00:09:30.647 "superblock": false, 00:09:30.647 "method": "bdev_raid_create", 00:09:30.647 "req_id": 1 00:09:30.647 } 00:09:30.647 Got JSON-RPC error response 00:09:30.647 response: 00:09:30.647 { 00:09:30.647 "code": -17, 00:09:30.647 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:30.647 } 00:09:30.647 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:30.647 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 
00:09:30.647 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:30.647 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:30.647 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:30.647 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.647 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.647 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.647 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:30.647 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.647 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:30.647 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:30.647 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:30.647 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.647 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.647 [2024-11-15 10:37:51.784594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:30.647 [2024-11-15 10:37:51.784669] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:30.647 [2024-11-15 10:37:51.784703] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:30.647 [2024-11-15 10:37:51.784718] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:30.647 [2024-11-15 10:37:51.787573] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:09:30.647 [2024-11-15 10:37:51.787618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:30.647 [2024-11-15 10:37:51.787720] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:30.647 [2024-11-15 10:37:51.787790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:30.647 pt1 00:09:30.647 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.647 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:30.647 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:30.647 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.647 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:30.647 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.647 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.647 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.647 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.647 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.647 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.647 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.647 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:30.647 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.647 10:37:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.906 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.906 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.906 "name": "raid_bdev1", 00:09:30.906 "uuid": "dc33675a-4ab9-4698-94c1-4a839e4ee3bc", 00:09:30.906 "strip_size_kb": 64, 00:09:30.906 "state": "configuring", 00:09:30.906 "raid_level": "concat", 00:09:30.906 "superblock": true, 00:09:30.906 "num_base_bdevs": 3, 00:09:30.906 "num_base_bdevs_discovered": 1, 00:09:30.906 "num_base_bdevs_operational": 3, 00:09:30.906 "base_bdevs_list": [ 00:09:30.906 { 00:09:30.906 "name": "pt1", 00:09:30.906 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:30.906 "is_configured": true, 00:09:30.906 "data_offset": 2048, 00:09:30.906 "data_size": 63488 00:09:30.906 }, 00:09:30.906 { 00:09:30.906 "name": null, 00:09:30.906 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:30.906 "is_configured": false, 00:09:30.906 "data_offset": 2048, 00:09:30.906 "data_size": 63488 00:09:30.906 }, 00:09:30.906 { 00:09:30.906 "name": null, 00:09:30.906 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:30.906 "is_configured": false, 00:09:30.906 "data_offset": 2048, 00:09:30.906 "data_size": 63488 00:09:30.906 } 00:09:30.906 ] 00:09:30.906 }' 00:09:30.906 10:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.906 10:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.165 10:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:31.165 10:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:31.165 10:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.165 10:37:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:31.165 [2024-11-15 10:37:52.296781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:31.165 [2024-11-15 10:37:52.296859] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:31.165 [2024-11-15 10:37:52.296894] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:31.165 [2024-11-15 10:37:52.296911] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:31.165 [2024-11-15 10:37:52.297500] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:31.165 [2024-11-15 10:37:52.297558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:31.165 [2024-11-15 10:37:52.297678] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:31.165 [2024-11-15 10:37:52.297712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:31.165 pt2 00:09:31.165 10:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.165 10:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:31.165 10:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.165 10:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.165 [2024-11-15 10:37:52.304786] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:31.165 10:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.165 10:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:31.165 10:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:31.165 10:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:09:31.165 10:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:31.165 10:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.165 10:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.165 10:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.165 10:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.165 10:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.165 10:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.165 10:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.165 10:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.165 10:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.165 10:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:31.423 10:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.423 10:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.423 "name": "raid_bdev1", 00:09:31.423 "uuid": "dc33675a-4ab9-4698-94c1-4a839e4ee3bc", 00:09:31.423 "strip_size_kb": 64, 00:09:31.423 "state": "configuring", 00:09:31.423 "raid_level": "concat", 00:09:31.423 "superblock": true, 00:09:31.423 "num_base_bdevs": 3, 00:09:31.423 "num_base_bdevs_discovered": 1, 00:09:31.423 "num_base_bdevs_operational": 3, 00:09:31.423 "base_bdevs_list": [ 00:09:31.423 { 00:09:31.423 "name": "pt1", 00:09:31.423 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:31.423 "is_configured": true, 00:09:31.423 "data_offset": 2048, 
00:09:31.423 "data_size": 63488 00:09:31.423 }, 00:09:31.423 { 00:09:31.423 "name": null, 00:09:31.423 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:31.423 "is_configured": false, 00:09:31.423 "data_offset": 0, 00:09:31.423 "data_size": 63488 00:09:31.423 }, 00:09:31.423 { 00:09:31.423 "name": null, 00:09:31.423 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:31.423 "is_configured": false, 00:09:31.423 "data_offset": 2048, 00:09:31.423 "data_size": 63488 00:09:31.423 } 00:09:31.423 ] 00:09:31.423 }' 00:09:31.423 10:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.423 10:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.681 10:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:31.681 10:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:31.681 10:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:31.681 10:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.681 10:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.681 [2024-11-15 10:37:52.800894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:31.681 [2024-11-15 10:37:52.800978] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:31.681 [2024-11-15 10:37:52.801021] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:31.681 [2024-11-15 10:37:52.801039] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:31.681 [2024-11-15 10:37:52.801873] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:31.681 [2024-11-15 10:37:52.801914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt2 00:09:31.681 [2024-11-15 10:37:52.802032] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:31.681 [2024-11-15 10:37:52.802070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:31.681 pt2 00:09:31.681 10:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.681 10:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:31.681 10:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:31.681 10:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:31.681 10:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.681 10:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.681 [2024-11-15 10:37:52.808867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:31.681 [2024-11-15 10:37:52.808925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:31.681 [2024-11-15 10:37:52.808948] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:31.681 [2024-11-15 10:37:52.808964] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:31.681 [2024-11-15 10:37:52.809427] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:31.681 [2024-11-15 10:37:52.809479] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:31.681 [2024-11-15 10:37:52.809573] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:31.681 [2024-11-15 10:37:52.809609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:31.681 [2024-11-15 10:37:52.809755] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:31.681 [2024-11-15 10:37:52.809776] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:31.681 [2024-11-15 10:37:52.810086] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:31.681 [2024-11-15 10:37:52.810277] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:31.681 [2024-11-15 10:37:52.810292] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:31.681 [2024-11-15 10:37:52.810456] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:31.681 pt3 00:09:31.681 10:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.681 10:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:31.681 10:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:31.681 10:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:31.681 10:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:31.681 10:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:31.681 10:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:31.681 10:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.681 10:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.681 10:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.681 10:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.681 10:37:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.681 10:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.681 10:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.681 10:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:31.681 10:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.681 10:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.681 10:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.939 10:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.939 "name": "raid_bdev1", 00:09:31.939 "uuid": "dc33675a-4ab9-4698-94c1-4a839e4ee3bc", 00:09:31.939 "strip_size_kb": 64, 00:09:31.939 "state": "online", 00:09:31.939 "raid_level": "concat", 00:09:31.939 "superblock": true, 00:09:31.939 "num_base_bdevs": 3, 00:09:31.939 "num_base_bdevs_discovered": 3, 00:09:31.939 "num_base_bdevs_operational": 3, 00:09:31.939 "base_bdevs_list": [ 00:09:31.939 { 00:09:31.939 "name": "pt1", 00:09:31.939 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:31.939 "is_configured": true, 00:09:31.939 "data_offset": 2048, 00:09:31.939 "data_size": 63488 00:09:31.939 }, 00:09:31.939 { 00:09:31.939 "name": "pt2", 00:09:31.939 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:31.939 "is_configured": true, 00:09:31.939 "data_offset": 2048, 00:09:31.939 "data_size": 63488 00:09:31.939 }, 00:09:31.939 { 00:09:31.939 "name": "pt3", 00:09:31.939 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:31.939 "is_configured": true, 00:09:31.939 "data_offset": 2048, 00:09:31.939 "data_size": 63488 00:09:31.939 } 00:09:31.939 ] 00:09:31.939 }' 00:09:31.939 10:37:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.939 10:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.197 10:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:32.197 10:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:32.197 10:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:32.197 10:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:32.197 10:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:32.197 10:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:32.198 10:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:32.198 10:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:32.198 10:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.198 10:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.198 [2024-11-15 10:37:53.285435] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:32.198 10:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.198 10:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:32.198 "name": "raid_bdev1", 00:09:32.198 "aliases": [ 00:09:32.198 "dc33675a-4ab9-4698-94c1-4a839e4ee3bc" 00:09:32.198 ], 00:09:32.198 "product_name": "Raid Volume", 00:09:32.198 "block_size": 512, 00:09:32.198 "num_blocks": 190464, 00:09:32.198 "uuid": "dc33675a-4ab9-4698-94c1-4a839e4ee3bc", 00:09:32.198 "assigned_rate_limits": { 00:09:32.198 "rw_ios_per_sec": 0, 00:09:32.198 "rw_mbytes_per_sec": 0, 00:09:32.198 "r_mbytes_per_sec": 0, 00:09:32.198 
"w_mbytes_per_sec": 0 00:09:32.198 }, 00:09:32.198 "claimed": false, 00:09:32.198 "zoned": false, 00:09:32.198 "supported_io_types": { 00:09:32.198 "read": true, 00:09:32.198 "write": true, 00:09:32.198 "unmap": true, 00:09:32.198 "flush": true, 00:09:32.198 "reset": true, 00:09:32.198 "nvme_admin": false, 00:09:32.198 "nvme_io": false, 00:09:32.198 "nvme_io_md": false, 00:09:32.198 "write_zeroes": true, 00:09:32.198 "zcopy": false, 00:09:32.198 "get_zone_info": false, 00:09:32.198 "zone_management": false, 00:09:32.198 "zone_append": false, 00:09:32.198 "compare": false, 00:09:32.198 "compare_and_write": false, 00:09:32.198 "abort": false, 00:09:32.198 "seek_hole": false, 00:09:32.198 "seek_data": false, 00:09:32.198 "copy": false, 00:09:32.198 "nvme_iov_md": false 00:09:32.198 }, 00:09:32.198 "memory_domains": [ 00:09:32.198 { 00:09:32.198 "dma_device_id": "system", 00:09:32.198 "dma_device_type": 1 00:09:32.198 }, 00:09:32.198 { 00:09:32.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.198 "dma_device_type": 2 00:09:32.198 }, 00:09:32.198 { 00:09:32.198 "dma_device_id": "system", 00:09:32.198 "dma_device_type": 1 00:09:32.198 }, 00:09:32.198 { 00:09:32.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.198 "dma_device_type": 2 00:09:32.198 }, 00:09:32.198 { 00:09:32.198 "dma_device_id": "system", 00:09:32.198 "dma_device_type": 1 00:09:32.198 }, 00:09:32.198 { 00:09:32.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.198 "dma_device_type": 2 00:09:32.198 } 00:09:32.198 ], 00:09:32.198 "driver_specific": { 00:09:32.198 "raid": { 00:09:32.198 "uuid": "dc33675a-4ab9-4698-94c1-4a839e4ee3bc", 00:09:32.198 "strip_size_kb": 64, 00:09:32.198 "state": "online", 00:09:32.198 "raid_level": "concat", 00:09:32.198 "superblock": true, 00:09:32.198 "num_base_bdevs": 3, 00:09:32.198 "num_base_bdevs_discovered": 3, 00:09:32.198 "num_base_bdevs_operational": 3, 00:09:32.198 "base_bdevs_list": [ 00:09:32.198 { 00:09:32.198 "name": "pt1", 00:09:32.198 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:09:32.198 "is_configured": true, 00:09:32.198 "data_offset": 2048, 00:09:32.198 "data_size": 63488 00:09:32.198 }, 00:09:32.198 { 00:09:32.198 "name": "pt2", 00:09:32.198 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:32.198 "is_configured": true, 00:09:32.198 "data_offset": 2048, 00:09:32.198 "data_size": 63488 00:09:32.198 }, 00:09:32.198 { 00:09:32.198 "name": "pt3", 00:09:32.198 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:32.198 "is_configured": true, 00:09:32.198 "data_offset": 2048, 00:09:32.198 "data_size": 63488 00:09:32.198 } 00:09:32.198 ] 00:09:32.198 } 00:09:32.198 } 00:09:32.198 }' 00:09:32.198 10:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:32.456 10:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:32.456 pt2 00:09:32.456 pt3' 00:09:32.456 10:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.456 10:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:32.456 10:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:32.456 10:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:32.456 10:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.456 10:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.456 10:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.456 10:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.456 10:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:09:32.456 10:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:32.456 10:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:32.456 10:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:32.456 10:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.456 10:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.456 10:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.456 10:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.456 10:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:32.456 10:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:32.456 10:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:32.456 10:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:32.456 10:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.456 10:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.456 10:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.456 10:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.456 10:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:32.456 10:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:32.456 10:37:53 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:32.456 10:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.456 10:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.456 10:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:32.456 [2024-11-15 10:37:53.593372] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:32.456 10:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.714 10:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' dc33675a-4ab9-4698-94c1-4a839e4ee3bc '!=' dc33675a-4ab9-4698-94c1-4a839e4ee3bc ']' 00:09:32.714 10:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:32.714 10:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:32.714 10:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:32.714 10:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66880 00:09:32.714 10:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66880 ']' 00:09:32.714 10:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66880 00:09:32.714 10:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:32.714 10:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:32.714 10:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66880 00:09:32.714 10:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:32.714 killing process with pid 66880 00:09:32.714 10:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:32.714 10:37:53 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66880' 00:09:32.714 10:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66880 00:09:32.714 [2024-11-15 10:37:53.674266] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:32.714 10:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66880 00:09:32.714 [2024-11-15 10:37:53.674389] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:32.714 [2024-11-15 10:37:53.674476] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:32.714 [2024-11-15 10:37:53.674518] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:32.971 [2024-11-15 10:37:53.939504] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:33.905 10:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:33.905 00:09:33.905 real 0m5.513s 00:09:33.905 user 0m8.303s 00:09:33.905 sys 0m0.773s 00:09:33.905 10:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.905 ************************************ 00:09:33.905 END TEST raid_superblock_test 00:09:33.905 ************************************ 00:09:33.905 10:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.905 10:37:55 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:33.905 10:37:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:33.905 10:37:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.905 10:37:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:33.905 ************************************ 00:09:33.905 START TEST raid_read_error_test 00:09:33.905 ************************************ 
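The superblock test traced above repeatedly calls `rpc_cmd bdev_raid_get_bdevs all`, filters the result with `jq -r '.[] | select(.name == "raid_bdev1")'`, and compares fields such as `state`, `raid_level`, `strip_size_kb`, and `num_base_bdevs_operational` against expected values. The check can be sketched in Python as follows; the JSON is abridged from the dump in this log, and the function is an illustrative re-implementation for readers, not the actual `bdev_raid.sh` code:

```python
import json

def verify_raid_bdev_state(raid_bdevs, name, expected_state,
                           raid_level, strip_size, num_operational):
    # Mirror of: jq -r '.[] | select(.name == "raid_bdev1")'
    info = next(b for b in raid_bdevs if b["name"] == name)
    # Field names taken from the bdev_raid_get_bdevs output shown in this log.
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    return info

# Abridged from the "online" dump logged at 10:37:52 above.
raid_bdevs = json.loads("""[{
  "name": "raid_bdev1",
  "strip_size_kb": 64,
  "state": "online",
  "raid_level": "concat",
  "superblock": true,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": "pt1", "is_configured": true},
    {"name": "pt2", "is_configured": true},
    {"name": "pt3", "is_configured": true}
  ]
}]""")

info = verify_raid_bdev_state(raid_bdevs, "raid_bdev1", "online",
                              "concat", 64, 3)
configured = [b["name"] for b in info["base_bdevs_list"] if b["is_configured"]]
print(configured)  # ['pt1', 'pt2', 'pt3']
```

This mirrors the log's progression: in the earlier "configuring" dump only `pt1` is configured (`num_base_bdevs_discovered: 1`), and after `pt2` and `pt3` are created and claimed the same check passes with state `online` and all three base bdevs configured.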
00:09:33.905 10:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:09:33.905 10:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:33.905 10:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:33.905 10:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:33.905 10:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:33.905 10:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:33.905 10:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:33.905 10:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:33.905 10:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:33.905 10:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:33.905 10:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:33.905 10:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:33.905 10:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:33.905 10:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:33.905 10:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:33.905 10:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:33.905 10:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:33.905 10:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:33.905 10:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:09:33.905 10:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:33.905 10:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:33.905 10:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:33.905 10:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:33.905 10:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:33.905 10:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:33.905 10:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:33.905 10:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ZtkhNc29op 00:09:33.905 10:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67133 00:09:33.905 10:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:33.905 10:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67133 00:09:33.905 10:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67133 ']' 00:09:33.905 10:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.905 10:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:33.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.905 10:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:33.905 10:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:33.905 10:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.164 [2024-11-15 10:37:55.120703] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:09:34.164 [2024-11-15 10:37:55.120856] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67133 ] 00:09:34.164 [2024-11-15 10:37:55.295050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.421 [2024-11-15 10:37:55.424232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.679 [2024-11-15 10:37:55.625987] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:34.679 [2024-11-15 10:37:55.626082] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:35.253 10:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:35.253 10:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:35.253 10:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:35.253 10:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:35.253 10:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.253 10:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.253 BaseBdev1_malloc 00:09:35.253 10:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.253 10:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:09:35.253 10:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.253 10:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.253 true 00:09:35.253 10:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.253 10:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:35.253 10:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.253 10:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.253 [2024-11-15 10:37:56.184123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:35.253 [2024-11-15 10:37:56.184191] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:35.253 [2024-11-15 10:37:56.184222] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:35.253 [2024-11-15 10:37:56.184240] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:35.254 [2024-11-15 10:37:56.187086] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:35.254 [2024-11-15 10:37:56.187139] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:35.254 BaseBdev1 00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:09:35.254 BaseBdev2_malloc
00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:35.254 true
00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:35.254 [2024-11-15 10:37:56.240518] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:09:35.254 [2024-11-15 10:37:56.240582] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:35.254 [2024-11-15 10:37:56.240607] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:09:35.254 [2024-11-15 10:37:56.240625] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:35.254 [2024-11-15 10:37:56.243347] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:35.254 [2024-11-15 10:37:56.243399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:09:35.254 BaseBdev2
00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:35.254 BaseBdev3_malloc
00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:35.254 true
00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:35.254 [2024-11-15 10:37:56.307172] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:09:35.254 [2024-11-15 10:37:56.307240] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:35.254 [2024-11-15 10:37:56.307267] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:09:35.254 [2024-11-15 10:37:56.307286] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:35.254 [2024-11-15 10:37:56.310073] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:35.254 [2024-11-15 10:37:56.310124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:09:35.254 BaseBdev3
00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s
00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:35.254 [2024-11-15 10:37:56.315264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:35.254 [2024-11-15 10:37:56.317730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:35.254 [2024-11-15 10:37:56.317849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:35.254 [2024-11-15 10:37:56.318119] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:09:35.254 [2024-11-15 10:37:56.318150] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:09:35.254 [2024-11-15 10:37:56.318469] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560
00:09:35.254 [2024-11-15 10:37:56.318707] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:09:35.254 [2024-11-15 10:37:56.318741] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200
00:09:35.254 [2024-11-15 10:37:56.318927] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:35.254 "name": "raid_bdev1",
00:09:35.254 "uuid": "d27fcfcc-b2b3-427c-bc60-2aa01efed162",
00:09:35.254 "strip_size_kb": 64,
00:09:35.254 "state": "online",
00:09:35.254 "raid_level": "concat",
00:09:35.254 "superblock": true,
00:09:35.254 "num_base_bdevs": 3,
00:09:35.254 "num_base_bdevs_discovered": 3,
00:09:35.254 "num_base_bdevs_operational": 3,
00:09:35.254 "base_bdevs_list": [
00:09:35.254 {
00:09:35.254 "name": "BaseBdev1",
00:09:35.254 "uuid": "ea76b3d6-46ee-5b01-8393-eb4bec4f1459",
00:09:35.254 "is_configured": true,
00:09:35.254 "data_offset": 2048,
00:09:35.254 "data_size": 63488
00:09:35.254 },
00:09:35.254 {
00:09:35.254 "name": "BaseBdev2",
00:09:35.254 "uuid": "53168cfe-7853-522f-b085-3ce5c01df58d",
00:09:35.254 "is_configured": true,
00:09:35.254 "data_offset": 2048,
00:09:35.254 "data_size": 63488
00:09:35.254 },
00:09:35.254 {
00:09:35.254 "name": "BaseBdev3",
00:09:35.254 "uuid": "124fdd0a-084e-5f6a-9182-c8a328ef48b6",
00:09:35.254 "is_configured": true,
00:09:35.254 "data_offset": 2048,
00:09:35.254 "data_size": 63488
00:09:35.254 }
00:09:35.254 ]
00:09:35.254 }'
00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:35.254 10:37:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:35.819 10:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:09:35.819 10:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:09:35.819 [2024-11-15 10:37:56.920841] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700
00:09:36.751 10:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:09:36.751 10:37:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:36.751 10:37:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:36.751 10:37:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:36.751 10:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:09:36.751 10:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]]
00:09:36.751 10:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3
00:09:36.751 10:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:09:36.751 10:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:36.751 10:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:36.751 10:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:36.751 10:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:36.751 10:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:36.751 10:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:36.751 10:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:36.751 10:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:36.751 10:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:36.751 10:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:36.751 10:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:36.751 10:37:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:36.751 10:37:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:36.751 10:37:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:36.751 10:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:36.751 "name": "raid_bdev1",
00:09:36.751 "uuid": "d27fcfcc-b2b3-427c-bc60-2aa01efed162",
00:09:36.751 "strip_size_kb": 64,
00:09:36.751 "state": "online",
00:09:36.751 "raid_level": "concat",
00:09:36.751 "superblock": true,
00:09:36.751 "num_base_bdevs": 3,
00:09:36.751 "num_base_bdevs_discovered": 3,
00:09:36.751 "num_base_bdevs_operational": 3,
00:09:36.751 "base_bdevs_list": [
00:09:36.751 {
00:09:36.751 "name": "BaseBdev1",
00:09:36.751 "uuid": "ea76b3d6-46ee-5b01-8393-eb4bec4f1459",
00:09:36.751 "is_configured": true,
00:09:36.751 "data_offset": 2048,
00:09:36.751 "data_size": 63488
00:09:36.751 },
00:09:36.751 {
00:09:36.751 "name": "BaseBdev2",
00:09:36.751 "uuid": "53168cfe-7853-522f-b085-3ce5c01df58d",
00:09:36.751 "is_configured": true,
00:09:36.751 "data_offset": 2048,
00:09:36.751 "data_size": 63488
00:09:36.751 },
00:09:36.751 {
00:09:36.751 "name": "BaseBdev3",
00:09:36.751 "uuid": "124fdd0a-084e-5f6a-9182-c8a328ef48b6",
00:09:36.751 "is_configured": true,
00:09:36.751 "data_offset": 2048,
00:09:36.751 "data_size": 63488
00:09:36.751 }
00:09:36.751 ]
00:09:36.751 }'
00:09:36.752 10:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:36.752 10:37:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:37.317 10:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:37.317 10:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:37.317 10:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:37.317 [2024-11-15 10:37:58.360360] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:37.317 [2024-11-15 10:37:58.360400] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:37.317 [2024-11-15 10:37:58.363799] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:37.317 [2024-11-15 10:37:58.363863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:37.317 [2024-11-15 10:37:58.363918] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:37.317 [2024-11-15 10:37:58.363937] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline
00:09:37.317 {
00:09:37.317 "results": [
00:09:37.317 {
00:09:37.317 "job": "raid_bdev1",
00:09:37.317 "core_mask": "0x1",
00:09:37.317 "workload": "randrw",
00:09:37.317 "percentage": 50,
00:09:37.317 "status": "finished",
00:09:37.317 "queue_depth": 1,
00:09:37.317 "io_size": 131072,
00:09:37.317 "runtime": 1.43706,
00:09:37.317 "iops": 10513.13097574214,
00:09:37.317 "mibps": 1314.1413719677676,
00:09:37.317 "io_failed": 1,
00:09:37.317 "io_timeout": 0,
00:09:37.317 "avg_latency_us": 132.71707483197855,
00:09:37.317 "min_latency_us": 40.49454545454545,
00:09:37.317 "max_latency_us": 1884.16
00:09:37.317 }
00:09:37.317 ],
00:09:37.317 "core_count": 1
00:09:37.317 }
00:09:37.317 10:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:37.317 10:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67133
00:09:37.318 10:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67133 ']'
00:09:37.318 10:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67133
00:09:37.318 10:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname
00:09:37.318 10:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:37.318 10:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67133
00:09:37.318 10:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:37.318 10:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:37.318 killing process with pid 67133
00:09:37.318 10:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67133'
10:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67133
00:09:37.318 [2024-11-15 10:37:58.398726] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:37.318 10:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67133
00:09:37.576 [2024-11-15 10:37:58.605048] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:38.948 10:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ZtkhNc29op
00:09:38.948 10:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:09:38.949 10:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:09:38.949 10:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70
00:09:38.949 10:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat
00:09:38.949 10:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:38.949 10:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:09:38.949 10:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]]
00:09:38.949
00:09:38.949 real 0m4.688s
00:09:38.949 user 0m5.816s
00:09:38.949 sys 0m0.556s
00:09:38.949 10:37:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:38.949 10:37:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:38.949 ************************************
00:09:38.949 END TEST raid_read_error_test
00:09:38.949 ************************************
00:09:38.949 10:37:59 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write
00:09:38.949 10:37:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:09:38.949 10:37:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:38.949 10:37:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:38.949 ************************************
00:09:38.949 START TEST raid_write_error_test
00:09:38.949 ************************************
00:09:38.949 10:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write
00:09:38.949 10:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat
00:09:38.949 10:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3
00:09:38.949 10:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:09:38.949 10:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:09:38.949 10:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:38.949 10:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:09:38.949 10:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:38.949 10:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:38.949 10:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:09:38.949 10:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:38.949 10:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:38.949 10:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:09:38.949 10:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:38.949 10:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:38.949 10:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:09:38.949 10:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:09:38.949 10:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:09:38.949 10:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:09:38.949 10:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:09:38.949 10:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:09:38.949 10:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:09:38.949 10:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']'
00:09:38.949 10:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:09:38.949 10:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:09:38.949 10:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:09:38.949 10:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.VrPsp14ADi
00:09:38.949 10:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67284
00:09:38.949 10:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:09:38.949 10:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67284
00:09:38.949 10:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67284 ']'
00:09:38.949 10:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:38.949 10:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:38.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:38.949 10:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:38.949 10:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:38.949 10:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:38.949 [2024-11-15 10:37:59.864666] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization...
00:09:38.949 [2024-11-15 10:37:59.864886] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67284 ]
00:09:39.207 [2024-11-15 10:38:00.043279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:39.465 [2024-11-15 10:38:00.174230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:40.031 [2024-11-15 10:38:00.380614] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
[2024-11-15 10:38:00.380655] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:40.031 10:38:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:40.031 10:38:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0
00:09:40.031 10:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:40.031 10:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:09:40.031 10:38:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:40.031 10:38:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:40.031 BaseBdev1_malloc
00:09:40.031 10:38:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:40.031 10:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:09:40.031 10:38:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:40.031 10:38:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:40.031 true
00:09:40.031 10:38:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:40.031 10:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:09:40.031 10:38:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:40.031 10:38:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:40.031 [2024-11-15 10:38:00.941966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:09:40.031 [2024-11-15 10:38:00.942180] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:40.031 [2024-11-15 10:38:00.942223] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:09:40.031 [2024-11-15 10:38:00.942243] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:40.031 [2024-11-15 10:38:00.945174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:40.031 [2024-11-15 10:38:00.945376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:09:40.031 BaseBdev1
00:09:40.031 10:38:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:40.031 10:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:40.031 10:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:09:40.031 10:38:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:40.031 10:38:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:40.031 BaseBdev2_malloc
00:09:40.031 10:38:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:40.031 10:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:09:40.031 10:38:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:40.031 10:38:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:40.031 true
00:09:40.031 10:38:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:40.032 10:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:09:40.032 10:38:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:40.032 10:38:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:40.032 [2024-11-15 10:38:00.998517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:09:40.032 [2024-11-15 10:38:00.998588] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:40.032 [2024-11-15 10:38:00.998616] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:09:40.032 [2024-11-15 10:38:00.998634] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:40.032 [2024-11-15 10:38:01.001403] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:40.032 [2024-11-15 10:38:01.001454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:09:40.032 BaseBdev2
00:09:40.032 10:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:40.032 10:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:40.032 10:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:09:40.032 10:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:40.032 10:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:40.032 BaseBdev3_malloc
00:09:40.032 10:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:40.032 10:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:09:40.032 10:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:40.032 10:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:40.032 true
00:09:40.032 10:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:40.032 10:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:09:40.032 10:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:40.032 10:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:40.032 [2024-11-15 10:38:01.069140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:09:40.032 [2024-11-15 10:38:01.069226] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:40.032 [2024-11-15 10:38:01.069256] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:09:40.032 [2024-11-15 10:38:01.069275] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:40.032 [2024-11-15 10:38:01.072196] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:40.032 [2024-11-15 10:38:01.072249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:09:40.032 BaseBdev3
00:09:40.032 10:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:40.032 10:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s
00:09:40.032 10:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:40.032 10:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:40.032 [2024-11-15 10:38:01.077234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:40.032 [2024-11-15 10:38:01.079818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:40.032 [2024-11-15 10:38:01.079933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:40.032 [2024-11-15 10:38:01.080221] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:09:40.032 [2024-11-15 10:38:01.080241] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:09:40.032 [2024-11-15 10:38:01.080754] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560
00:09:40.032 [2024-11-15 10:38:01.081033] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:09:40.032 [2024-11-15 10:38:01.081098] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200
00:09:40.032 [2024-11-15 10:38:01.081570] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:40.032 10:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:40.032 10:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:09:40.032 10:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:40.032 10:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:40.032 10:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:40.032 10:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:40.032 10:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:40.032 10:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:40.032 10:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:40.032 10:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:40.032 10:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:40.032 10:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:40.032 10:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:40.032 10:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:40.032 10:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:40.032 10:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:40.032 10:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:40.032 "name": "raid_bdev1",
00:09:40.032 "uuid": "46686bbf-0584-4289-882a-37c4f84c7d70",
00:09:40.032 "strip_size_kb": 64,
00:09:40.032 "state": "online",
00:09:40.032 "raid_level": "concat",
00:09:40.032 "superblock": true,
00:09:40.032 "num_base_bdevs": 3,
00:09:40.032 "num_base_bdevs_discovered": 3,
00:09:40.032 "num_base_bdevs_operational": 3,
00:09:40.032 "base_bdevs_list": [
00:09:40.032 {
00:09:40.032 "name": "BaseBdev1",
00:09:40.032 "uuid": "29ba2f53-1661-57f2-b2fe-6e2362d45999",
00:09:40.032 "is_configured": true,
00:09:40.032 "data_offset": 2048,
00:09:40.032 "data_size": 63488
00:09:40.032 },
00:09:40.032 {
00:09:40.032 "name": "BaseBdev2",
00:09:40.032 "uuid": "90673323-d454-50fb-8bab-62e9530ddf04",
00:09:40.032 "is_configured": true,
00:09:40.032 "data_offset": 2048,
00:09:40.032 "data_size": 63488
00:09:40.032 },
00:09:40.032 {
00:09:40.032 "name": "BaseBdev3",
00:09:40.032 "uuid": "8223dc1a-c6c8-5827-93b0-606c1f8869e4",
00:09:40.032 "is_configured": true,
00:09:40.032 "data_offset": 2048,
00:09:40.032 "data_size": 63488
00:09:40.032 }
00:09:40.032 ]
00:09:40.032 }'
00:09:40.032 10:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:40.032 10:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:40.598 10:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:09:40.598 10:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:09:40.598 [2024-11-15 10:38:01.703106] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700
00:09:41.534 10:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:09:41.534 10:38:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:41.534 10:38:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:41.534 10:38:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:41.534 10:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:09:41.534 10:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]]
00:09:41.534 10:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3
00:09:41.534 10:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:09:41.534 10:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:41.534 10:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:41.534 10:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:41.534 10:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:41.534 10:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:41.534 10:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:41.534 10:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:41.534 10:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:41.534 10:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:41.534 10:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:41.534 10:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:41.534 10:38:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:41.534 10:38:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:41.534 10:38:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:41.534 10:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:41.534 "name": "raid_bdev1",
00:09:41.534 "uuid": "46686bbf-0584-4289-882a-37c4f84c7d70",
00:09:41.534 "strip_size_kb": 64,
00:09:41.534 "state": "online",
00:09:41.534 "raid_level": "concat",
00:09:41.534 "superblock": true,
00:09:41.534 "num_base_bdevs": 3,
00:09:41.534 "num_base_bdevs_discovered": 3,
00:09:41.534 "num_base_bdevs_operational": 3,
00:09:41.535 "base_bdevs_list": [
00:09:41.535 {
00:09:41.535 "name": "BaseBdev1",
00:09:41.535 "uuid": "29ba2f53-1661-57f2-b2fe-6e2362d45999",
00:09:41.535 "is_configured": true,
00:09:41.535 "data_offset": 2048,
00:09:41.535 "data_size": 63488
00:09:41.535 },
00:09:41.535 {
00:09:41.535 "name": "BaseBdev2",
00:09:41.535 "uuid": "90673323-d454-50fb-8bab-62e9530ddf04",
00:09:41.535 "is_configured": true,
00:09:41.535 "data_offset": 2048,
00:09:41.535 "data_size": 63488
00:09:41.535 },
00:09:41.535 {
00:09:41.535 "name": "BaseBdev3",
00:09:41.535 "uuid": "8223dc1a-c6c8-5827-93b0-606c1f8869e4",
00:09:41.535 "is_configured": true,
00:09:41.535 "data_offset": 2048,
00:09:41.535 "data_size": 63488
00:09:41.535 }
00:09:41.535 ]
00:09:41.535 }'
00:09:41.535 10:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:41.535 10:38:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:42.102 10:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:42.102 10:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:42.102 10:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:42.102 [2024-11-15 10:38:03.130310] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:42.102 [2024-11-15 10:38:03.130346] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:42.102 [2024-11-15 10:38:03.133837] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:42.102 [2024-11-15 10:38:03.133897] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:42.102 [2024-11-15 10:38:03.133952] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:42.102 [2024-11-15 10:38:03.133970] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline
00:09:42.102 {
00:09:42.102 "results": [
00:09:42.102 {
00:09:42.102 "job": "raid_bdev1",
00:09:42.102 "core_mask": "0x1",
00:09:42.102 "workload": "randrw",
00:09:42.102 "percentage": 50,
00:09:42.102 "status": "finished",
00:09:42.102 "queue_depth": 1,
00:09:42.102 "io_size": 131072,
00:09:42.102 "runtime": 1.424739,
00:09:42.102 "iops": 10608.960658759253,
00:09:42.102 "mibps": 1326.1200823449067,
00:09:42.102 "io_failed": 1,
00:09:42.102 "io_timeout": 0,
00:09:42.102 "avg_latency_us": 131.42499073829055,
00:09:42.102 "min_latency_us": 40.49454545454545,
00:09:42.102 "max_latency_us": 1817.1345454545456
00:09:42.102 }
00:09:42.102 ],
00:09:42.102 "core_count": 1
00:09:42.102 }
00:09:42.102 10:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:42.102 10:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67284
00:09:42.102 10:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67284 ']'
00:09:42.102 10:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67284
00:09:42.102 10:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname
00:09:42.102 10:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:42.102 10:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67284
killing process with pid 67284
10:38:03 bdev_raid.raid_write_error_test --
common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:42.102 10:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:42.102 10:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67284' 00:09:42.102 10:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67284 00:09:42.102 [2024-11-15 10:38:03.171285] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:42.102 10:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67284 00:09:42.361 [2024-11-15 10:38:03.370964] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:43.296 10:38:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.VrPsp14ADi 00:09:43.296 10:38:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:43.296 10:38:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:43.296 10:38:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:09:43.296 10:38:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:43.296 10:38:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:43.296 10:38:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:43.296 ************************************ 00:09:43.296 END TEST raid_write_error_test 00:09:43.296 ************************************ 00:09:43.296 10:38:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:09:43.296 00:09:43.296 real 0m4.672s 00:09:43.296 user 0m5.860s 00:09:43.296 sys 0m0.558s 00:09:43.296 10:38:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.296 10:38:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.554 
10:38:04 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:43.554 10:38:04 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:09:43.554 10:38:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:43.554 10:38:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.554 10:38:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:43.554 ************************************ 00:09:43.554 START TEST raid_state_function_test 00:09:43.554 ************************************ 00:09:43.554 10:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:09:43.554 10:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:43.554 10:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:43.554 10:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:43.554 10:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:43.554 10:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:43.554 10:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:43.554 10:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:43.554 10:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:43.554 10:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:43.554 10:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:43.554 10:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:43.554 10:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:09:43.554 10:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:43.554 10:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:43.554 10:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:43.554 10:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:43.554 10:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:43.554 10:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:43.554 10:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:43.554 10:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:43.554 10:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:43.554 10:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:43.554 10:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:43.554 10:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:43.554 10:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:43.554 Process raid pid: 67428 00:09:43.554 10:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67428 00:09:43.554 10:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67428' 00:09:43.554 10:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67428 00:09:43.554 10:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:43.554 
10:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67428 ']' 00:09:43.554 10:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.554 10:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:43.554 10:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.554 10:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:43.554 10:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.554 [2024-11-15 10:38:04.587291] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:09:43.554 [2024-11-15 10:38:04.587744] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:43.813 [2024-11-15 10:38:04.774596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.813 [2024-11-15 10:38:04.906383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.070 [2024-11-15 10:38:05.113378] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:44.070 [2024-11-15 10:38:05.113442] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:44.636 10:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:44.636 10:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:44.636 10:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # 
rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:44.636 10:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.636 10:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.636 [2024-11-15 10:38:05.615239] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:44.636 [2024-11-15 10:38:05.615483] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:44.636 [2024-11-15 10:38:05.615671] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:44.636 [2024-11-15 10:38:05.615710] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:44.636 [2024-11-15 10:38:05.615723] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:44.636 [2024-11-15 10:38:05.615739] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:44.636 10:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.636 10:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:44.637 10:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.637 10:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.637 10:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:44.637 10:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:44.637 10:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.637 10:38:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.637 10:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.637 10:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.637 10:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.637 10:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.637 10:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.637 10:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.637 10:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.637 10:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.637 10:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.637 "name": "Existed_Raid", 00:09:44.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.637 "strip_size_kb": 0, 00:09:44.637 "state": "configuring", 00:09:44.637 "raid_level": "raid1", 00:09:44.637 "superblock": false, 00:09:44.637 "num_base_bdevs": 3, 00:09:44.637 "num_base_bdevs_discovered": 0, 00:09:44.637 "num_base_bdevs_operational": 3, 00:09:44.637 "base_bdevs_list": [ 00:09:44.637 { 00:09:44.637 "name": "BaseBdev1", 00:09:44.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.637 "is_configured": false, 00:09:44.637 "data_offset": 0, 00:09:44.637 "data_size": 0 00:09:44.637 }, 00:09:44.637 { 00:09:44.637 "name": "BaseBdev2", 00:09:44.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.637 "is_configured": false, 00:09:44.637 "data_offset": 0, 00:09:44.637 "data_size": 0 00:09:44.637 }, 00:09:44.637 { 00:09:44.637 "name": "BaseBdev3", 00:09:44.637 "uuid": "00000000-0000-0000-0000-000000000000", 
00:09:44.637 "is_configured": false, 00:09:44.637 "data_offset": 0, 00:09:44.637 "data_size": 0 00:09:44.637 } 00:09:44.637 ] 00:09:44.637 }' 00:09:44.637 10:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.637 10:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.214 10:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:45.214 10:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.214 10:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.214 [2024-11-15 10:38:06.127341] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:45.214 [2024-11-15 10:38:06.127553] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:45.214 10:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.214 10:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:45.214 10:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.214 10:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.214 [2024-11-15 10:38:06.139325] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:45.214 [2024-11-15 10:38:06.139544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:45.214 [2024-11-15 10:38:06.139681] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:45.215 [2024-11-15 10:38:06.139746] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:45.215 
[2024-11-15 10:38:06.139868] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:45.215 [2024-11-15 10:38:06.140012] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:45.215 10:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.215 10:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:45.215 10:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.215 10:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.215 BaseBdev1 00:09:45.215 [2024-11-15 10:38:06.183725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:45.215 10:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.215 10:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:45.215 10:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:45.215 10:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:45.215 10:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:45.215 10:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:45.215 10:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:45.215 10:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:45.215 10:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.215 10:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.215 10:38:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.215 10:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:45.215 10:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.215 10:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.215 [ 00:09:45.215 { 00:09:45.215 "name": "BaseBdev1", 00:09:45.215 "aliases": [ 00:09:45.215 "81586b6b-26ec-4951-b663-bdd0abe9c178" 00:09:45.215 ], 00:09:45.215 "product_name": "Malloc disk", 00:09:45.215 "block_size": 512, 00:09:45.215 "num_blocks": 65536, 00:09:45.215 "uuid": "81586b6b-26ec-4951-b663-bdd0abe9c178", 00:09:45.215 "assigned_rate_limits": { 00:09:45.215 "rw_ios_per_sec": 0, 00:09:45.215 "rw_mbytes_per_sec": 0, 00:09:45.215 "r_mbytes_per_sec": 0, 00:09:45.215 "w_mbytes_per_sec": 0 00:09:45.215 }, 00:09:45.215 "claimed": true, 00:09:45.215 "claim_type": "exclusive_write", 00:09:45.215 "zoned": false, 00:09:45.215 "supported_io_types": { 00:09:45.215 "read": true, 00:09:45.215 "write": true, 00:09:45.215 "unmap": true, 00:09:45.215 "flush": true, 00:09:45.215 "reset": true, 00:09:45.215 "nvme_admin": false, 00:09:45.215 "nvme_io": false, 00:09:45.215 "nvme_io_md": false, 00:09:45.215 "write_zeroes": true, 00:09:45.215 "zcopy": true, 00:09:45.215 "get_zone_info": false, 00:09:45.215 "zone_management": false, 00:09:45.215 "zone_append": false, 00:09:45.215 "compare": false, 00:09:45.215 "compare_and_write": false, 00:09:45.215 "abort": true, 00:09:45.215 "seek_hole": false, 00:09:45.215 "seek_data": false, 00:09:45.215 "copy": true, 00:09:45.215 "nvme_iov_md": false 00:09:45.215 }, 00:09:45.215 "memory_domains": [ 00:09:45.215 { 00:09:45.215 "dma_device_id": "system", 00:09:45.215 "dma_device_type": 1 00:09:45.215 }, 00:09:45.215 { 00:09:45.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.215 "dma_device_type": 
2 00:09:45.215 } 00:09:45.215 ], 00:09:45.215 "driver_specific": {} 00:09:45.215 } 00:09:45.215 ] 00:09:45.215 10:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.215 10:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:45.215 10:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:45.215 10:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.215 10:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.215 10:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:45.215 10:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:45.215 10:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.215 10:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.215 10:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.215 10:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.215 10:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.215 10:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.215 10:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.215 10:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.215 10:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.215 10:38:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.215 10:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.215 "name": "Existed_Raid", 00:09:45.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.215 "strip_size_kb": 0, 00:09:45.215 "state": "configuring", 00:09:45.215 "raid_level": "raid1", 00:09:45.215 "superblock": false, 00:09:45.215 "num_base_bdevs": 3, 00:09:45.215 "num_base_bdevs_discovered": 1, 00:09:45.215 "num_base_bdevs_operational": 3, 00:09:45.215 "base_bdevs_list": [ 00:09:45.215 { 00:09:45.215 "name": "BaseBdev1", 00:09:45.215 "uuid": "81586b6b-26ec-4951-b663-bdd0abe9c178", 00:09:45.215 "is_configured": true, 00:09:45.215 "data_offset": 0, 00:09:45.215 "data_size": 65536 00:09:45.215 }, 00:09:45.215 { 00:09:45.215 "name": "BaseBdev2", 00:09:45.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.215 "is_configured": false, 00:09:45.215 "data_offset": 0, 00:09:45.215 "data_size": 0 00:09:45.215 }, 00:09:45.215 { 00:09:45.215 "name": "BaseBdev3", 00:09:45.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.215 "is_configured": false, 00:09:45.215 "data_offset": 0, 00:09:45.215 "data_size": 0 00:09:45.215 } 00:09:45.215 ] 00:09:45.215 }' 00:09:45.215 10:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.216 10:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.793 10:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:45.793 10:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.793 10:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.794 [2024-11-15 10:38:06.747920] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:45.794 [2024-11-15 10:38:06.747984] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:45.794 10:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.794 10:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:45.794 10:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.794 10:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.794 [2024-11-15 10:38:06.755949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:45.794 [2024-11-15 10:38:06.758544] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:45.794 [2024-11-15 10:38:06.758600] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:45.794 [2024-11-15 10:38:06.758617] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:45.794 [2024-11-15 10:38:06.758634] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:45.794 10:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.794 10:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:45.794 10:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:45.794 10:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:45.794 10:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.794 10:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.794 10:38:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:45.794 10:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:45.794 10:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.794 10:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.794 10:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.794 10:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.794 10:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.794 10:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.794 10:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.794 10:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.794 10:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.794 10:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.794 10:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.794 "name": "Existed_Raid", 00:09:45.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.794 "strip_size_kb": 0, 00:09:45.794 "state": "configuring", 00:09:45.794 "raid_level": "raid1", 00:09:45.794 "superblock": false, 00:09:45.794 "num_base_bdevs": 3, 00:09:45.794 "num_base_bdevs_discovered": 1, 00:09:45.794 "num_base_bdevs_operational": 3, 00:09:45.794 "base_bdevs_list": [ 00:09:45.794 { 00:09:45.794 "name": "BaseBdev1", 00:09:45.794 "uuid": "81586b6b-26ec-4951-b663-bdd0abe9c178", 00:09:45.794 "is_configured": true, 00:09:45.794 "data_offset": 0, 00:09:45.794 "data_size": 65536 
00:09:45.794 }, 00:09:45.794 { 00:09:45.794 "name": "BaseBdev2", 00:09:45.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.794 "is_configured": false, 00:09:45.794 "data_offset": 0, 00:09:45.794 "data_size": 0 00:09:45.794 }, 00:09:45.794 { 00:09:45.794 "name": "BaseBdev3", 00:09:45.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.794 "is_configured": false, 00:09:45.794 "data_offset": 0, 00:09:45.794 "data_size": 0 00:09:45.794 } 00:09:45.794 ] 00:09:45.794 }' 00:09:45.794 10:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.794 10:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.362 10:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:46.362 10:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.362 10:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.362 [2024-11-15 10:38:07.310895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:46.362 BaseBdev2 00:09:46.362 10:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.362 10:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:46.362 10:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:46.362 10:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:46.362 10:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:46.362 10:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:46.362 10:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:46.362 10:38:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:46.362 10:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.362 10:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.362 10:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.362 10:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:46.362 10:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.362 10:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.362 [ 00:09:46.362 { 00:09:46.362 "name": "BaseBdev2", 00:09:46.362 "aliases": [ 00:09:46.362 "279c5e1e-e5ac-49e4-b6b0-63e733c12afe" 00:09:46.362 ], 00:09:46.362 "product_name": "Malloc disk", 00:09:46.362 "block_size": 512, 00:09:46.362 "num_blocks": 65536, 00:09:46.362 "uuid": "279c5e1e-e5ac-49e4-b6b0-63e733c12afe", 00:09:46.362 "assigned_rate_limits": { 00:09:46.362 "rw_ios_per_sec": 0, 00:09:46.362 "rw_mbytes_per_sec": 0, 00:09:46.362 "r_mbytes_per_sec": 0, 00:09:46.362 "w_mbytes_per_sec": 0 00:09:46.362 }, 00:09:46.362 "claimed": true, 00:09:46.362 "claim_type": "exclusive_write", 00:09:46.362 "zoned": false, 00:09:46.362 "supported_io_types": { 00:09:46.362 "read": true, 00:09:46.362 "write": true, 00:09:46.362 "unmap": true, 00:09:46.362 "flush": true, 00:09:46.362 "reset": true, 00:09:46.362 "nvme_admin": false, 00:09:46.362 "nvme_io": false, 00:09:46.362 "nvme_io_md": false, 00:09:46.362 "write_zeroes": true, 00:09:46.362 "zcopy": true, 00:09:46.362 "get_zone_info": false, 00:09:46.362 "zone_management": false, 00:09:46.362 "zone_append": false, 00:09:46.362 "compare": false, 00:09:46.362 "compare_and_write": false, 00:09:46.362 "abort": true, 00:09:46.362 "seek_hole": false, 00:09:46.362 
"seek_data": false, 00:09:46.362 "copy": true, 00:09:46.362 "nvme_iov_md": false 00:09:46.362 }, 00:09:46.362 "memory_domains": [ 00:09:46.362 { 00:09:46.362 "dma_device_id": "system", 00:09:46.362 "dma_device_type": 1 00:09:46.362 }, 00:09:46.362 { 00:09:46.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.362 "dma_device_type": 2 00:09:46.362 } 00:09:46.362 ], 00:09:46.362 "driver_specific": {} 00:09:46.362 } 00:09:46.362 ] 00:09:46.362 10:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.362 10:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:46.362 10:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:46.362 10:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:46.362 10:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:46.362 10:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.362 10:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.362 10:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:46.362 10:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:46.362 10:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:46.363 10:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.363 10:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.363 10:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.363 10:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- 
# local tmp 00:09:46.363 10:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.363 10:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.363 10:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.363 10:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.363 10:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.363 10:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.363 "name": "Existed_Raid", 00:09:46.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.363 "strip_size_kb": 0, 00:09:46.363 "state": "configuring", 00:09:46.363 "raid_level": "raid1", 00:09:46.363 "superblock": false, 00:09:46.363 "num_base_bdevs": 3, 00:09:46.363 "num_base_bdevs_discovered": 2, 00:09:46.363 "num_base_bdevs_operational": 3, 00:09:46.363 "base_bdevs_list": [ 00:09:46.363 { 00:09:46.363 "name": "BaseBdev1", 00:09:46.363 "uuid": "81586b6b-26ec-4951-b663-bdd0abe9c178", 00:09:46.363 "is_configured": true, 00:09:46.363 "data_offset": 0, 00:09:46.363 "data_size": 65536 00:09:46.363 }, 00:09:46.363 { 00:09:46.363 "name": "BaseBdev2", 00:09:46.363 "uuid": "279c5e1e-e5ac-49e4-b6b0-63e733c12afe", 00:09:46.363 "is_configured": true, 00:09:46.363 "data_offset": 0, 00:09:46.363 "data_size": 65536 00:09:46.363 }, 00:09:46.363 { 00:09:46.363 "name": "BaseBdev3", 00:09:46.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.363 "is_configured": false, 00:09:46.363 "data_offset": 0, 00:09:46.363 "data_size": 0 00:09:46.363 } 00:09:46.363 ] 00:09:46.363 }' 00:09:46.363 10:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.363 10:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
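The `verify_raid_bdev_state` helper traced above fetches `bdev_raid_get_bdevs all`, filters the array with `jq -r '.[] | select(.name == "Existed_Raid")'`, and compares the `state`, `raid_level`, and base-bdev counts against expected values. A minimal self-contained sketch of that same check, with the RPC output inlined as sample JSON taken from the log (the Python helper name and signature are illustrative, not SPDK API):

```python
import json

# Sample shaped like `rpc.py bdev_raid_get_bdevs all` output, values from the log above.
rpc_output = json.loads('''[{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "raid1",
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 3
}]''')

def verify_raid_bdev_state(bdevs, name, expected_state, raid_level, num_operational):
    # Mirrors the jq 'select(.name == ...)' filter plus the field comparisons
    # the shell helper performs on the selected raid bdev.
    info = next(b for b in bdevs if b["name"] == name)
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["num_base_bdevs_operational"] == num_operational)

print(verify_raid_bdev_state(rpc_output, "Existed_Raid", "configuring", "raid1", 3))
```

With two of three base bdevs attached, the raid bdev stays `configuring`; once `BaseBdev3` is claimed (next step in the log) the same check is rerun with `expected_state=online`.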
00:09:46.931 10:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:46.931 10:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.931 10:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.931 [2024-11-15 10:38:07.925023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:46.931 [2024-11-15 10:38:07.925082] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:46.931 [2024-11-15 10:38:07.925102] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:46.931 [2024-11-15 10:38:07.925452] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:46.931 [2024-11-15 10:38:07.925724] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:46.931 [2024-11-15 10:38:07.925759] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:46.931 BaseBdev3 00:09:46.931 [2024-11-15 10:38:07.926081] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:46.931 10:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.931 10:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:46.931 10:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:46.931 10:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:46.931 10:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:46.931 10:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:46.931 10:38:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:46.931 10:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:46.931 10:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.931 10:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.931 10:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.931 10:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:46.931 10:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.931 10:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.931 [ 00:09:46.931 { 00:09:46.931 "name": "BaseBdev3", 00:09:46.931 "aliases": [ 00:09:46.931 "5ee2745d-eef0-4abc-86e5-c15bca85750b" 00:09:46.931 ], 00:09:46.931 "product_name": "Malloc disk", 00:09:46.931 "block_size": 512, 00:09:46.931 "num_blocks": 65536, 00:09:46.931 "uuid": "5ee2745d-eef0-4abc-86e5-c15bca85750b", 00:09:46.931 "assigned_rate_limits": { 00:09:46.931 "rw_ios_per_sec": 0, 00:09:46.931 "rw_mbytes_per_sec": 0, 00:09:46.931 "r_mbytes_per_sec": 0, 00:09:46.931 "w_mbytes_per_sec": 0 00:09:46.931 }, 00:09:46.931 "claimed": true, 00:09:46.931 "claim_type": "exclusive_write", 00:09:46.931 "zoned": false, 00:09:46.931 "supported_io_types": { 00:09:46.931 "read": true, 00:09:46.931 "write": true, 00:09:46.931 "unmap": true, 00:09:46.931 "flush": true, 00:09:46.931 "reset": true, 00:09:46.931 "nvme_admin": false, 00:09:46.931 "nvme_io": false, 00:09:46.931 "nvme_io_md": false, 00:09:46.931 "write_zeroes": true, 00:09:46.931 "zcopy": true, 00:09:46.931 "get_zone_info": false, 00:09:46.931 "zone_management": false, 00:09:46.931 "zone_append": false, 00:09:46.931 "compare": false, 00:09:46.931 "compare_and_write": false, 
00:09:46.931 "abort": true, 00:09:46.931 "seek_hole": false, 00:09:46.931 "seek_data": false, 00:09:46.931 "copy": true, 00:09:46.931 "nvme_iov_md": false 00:09:46.931 }, 00:09:46.931 "memory_domains": [ 00:09:46.931 { 00:09:46.931 "dma_device_id": "system", 00:09:46.931 "dma_device_type": 1 00:09:46.931 }, 00:09:46.931 { 00:09:46.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.931 "dma_device_type": 2 00:09:46.931 } 00:09:46.931 ], 00:09:46.931 "driver_specific": {} 00:09:46.931 } 00:09:46.931 ] 00:09:46.931 10:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.931 10:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:46.931 10:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:46.932 10:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:46.932 10:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:46.932 10:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.932 10:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:46.932 10:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:46.932 10:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:46.932 10:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:46.932 10:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.932 10:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.932 10:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.932 
10:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.932 10:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.932 10:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.932 10:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.932 10:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.932 10:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.932 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.932 "name": "Existed_Raid", 00:09:46.932 "uuid": "c2f11ef8-0588-415a-b81c-4e2c2879de1c", 00:09:46.932 "strip_size_kb": 0, 00:09:46.932 "state": "online", 00:09:46.932 "raid_level": "raid1", 00:09:46.932 "superblock": false, 00:09:46.932 "num_base_bdevs": 3, 00:09:46.932 "num_base_bdevs_discovered": 3, 00:09:46.932 "num_base_bdevs_operational": 3, 00:09:46.932 "base_bdevs_list": [ 00:09:46.932 { 00:09:46.932 "name": "BaseBdev1", 00:09:46.932 "uuid": "81586b6b-26ec-4951-b663-bdd0abe9c178", 00:09:46.932 "is_configured": true, 00:09:46.932 "data_offset": 0, 00:09:46.932 "data_size": 65536 00:09:46.932 }, 00:09:46.932 { 00:09:46.932 "name": "BaseBdev2", 00:09:46.932 "uuid": "279c5e1e-e5ac-49e4-b6b0-63e733c12afe", 00:09:46.932 "is_configured": true, 00:09:46.932 "data_offset": 0, 00:09:46.932 "data_size": 65536 00:09:46.932 }, 00:09:46.932 { 00:09:46.932 "name": "BaseBdev3", 00:09:46.932 "uuid": "5ee2745d-eef0-4abc-86e5-c15bca85750b", 00:09:46.932 "is_configured": true, 00:09:46.932 "data_offset": 0, 00:09:46.932 "data_size": 65536 00:09:46.932 } 00:09:46.932 ] 00:09:46.932 }' 00:09:46.932 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.932 10:38:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.499 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:47.499 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:47.499 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:47.499 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:47.499 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:47.499 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:47.499 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:47.499 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:47.499 10:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.499 10:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.499 [2024-11-15 10:38:08.473680] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:47.499 10:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.499 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:47.499 "name": "Existed_Raid", 00:09:47.499 "aliases": [ 00:09:47.499 "c2f11ef8-0588-415a-b81c-4e2c2879de1c" 00:09:47.499 ], 00:09:47.499 "product_name": "Raid Volume", 00:09:47.499 "block_size": 512, 00:09:47.499 "num_blocks": 65536, 00:09:47.499 "uuid": "c2f11ef8-0588-415a-b81c-4e2c2879de1c", 00:09:47.499 "assigned_rate_limits": { 00:09:47.499 "rw_ios_per_sec": 0, 00:09:47.499 "rw_mbytes_per_sec": 0, 00:09:47.499 "r_mbytes_per_sec": 0, 00:09:47.499 
"w_mbytes_per_sec": 0 00:09:47.499 }, 00:09:47.499 "claimed": false, 00:09:47.499 "zoned": false, 00:09:47.499 "supported_io_types": { 00:09:47.499 "read": true, 00:09:47.499 "write": true, 00:09:47.499 "unmap": false, 00:09:47.499 "flush": false, 00:09:47.499 "reset": true, 00:09:47.499 "nvme_admin": false, 00:09:47.499 "nvme_io": false, 00:09:47.499 "nvme_io_md": false, 00:09:47.499 "write_zeroes": true, 00:09:47.499 "zcopy": false, 00:09:47.499 "get_zone_info": false, 00:09:47.499 "zone_management": false, 00:09:47.499 "zone_append": false, 00:09:47.499 "compare": false, 00:09:47.499 "compare_and_write": false, 00:09:47.499 "abort": false, 00:09:47.499 "seek_hole": false, 00:09:47.499 "seek_data": false, 00:09:47.499 "copy": false, 00:09:47.499 "nvme_iov_md": false 00:09:47.499 }, 00:09:47.499 "memory_domains": [ 00:09:47.499 { 00:09:47.499 "dma_device_id": "system", 00:09:47.499 "dma_device_type": 1 00:09:47.499 }, 00:09:47.499 { 00:09:47.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.499 "dma_device_type": 2 00:09:47.499 }, 00:09:47.499 { 00:09:47.499 "dma_device_id": "system", 00:09:47.499 "dma_device_type": 1 00:09:47.499 }, 00:09:47.499 { 00:09:47.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.499 "dma_device_type": 2 00:09:47.499 }, 00:09:47.499 { 00:09:47.499 "dma_device_id": "system", 00:09:47.499 "dma_device_type": 1 00:09:47.499 }, 00:09:47.499 { 00:09:47.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.499 "dma_device_type": 2 00:09:47.499 } 00:09:47.499 ], 00:09:47.499 "driver_specific": { 00:09:47.499 "raid": { 00:09:47.499 "uuid": "c2f11ef8-0588-415a-b81c-4e2c2879de1c", 00:09:47.499 "strip_size_kb": 0, 00:09:47.499 "state": "online", 00:09:47.499 "raid_level": "raid1", 00:09:47.499 "superblock": false, 00:09:47.499 "num_base_bdevs": 3, 00:09:47.499 "num_base_bdevs_discovered": 3, 00:09:47.499 "num_base_bdevs_operational": 3, 00:09:47.499 "base_bdevs_list": [ 00:09:47.499 { 00:09:47.499 "name": "BaseBdev1", 00:09:47.499 "uuid": 
"81586b6b-26ec-4951-b663-bdd0abe9c178", 00:09:47.499 "is_configured": true, 00:09:47.499 "data_offset": 0, 00:09:47.499 "data_size": 65536 00:09:47.499 }, 00:09:47.499 { 00:09:47.499 "name": "BaseBdev2", 00:09:47.499 "uuid": "279c5e1e-e5ac-49e4-b6b0-63e733c12afe", 00:09:47.499 "is_configured": true, 00:09:47.499 "data_offset": 0, 00:09:47.499 "data_size": 65536 00:09:47.499 }, 00:09:47.499 { 00:09:47.499 "name": "BaseBdev3", 00:09:47.499 "uuid": "5ee2745d-eef0-4abc-86e5-c15bca85750b", 00:09:47.499 "is_configured": true, 00:09:47.499 "data_offset": 0, 00:09:47.499 "data_size": 65536 00:09:47.499 } 00:09:47.499 ] 00:09:47.499 } 00:09:47.499 } 00:09:47.499 }' 00:09:47.499 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:47.499 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:47.499 BaseBdev2 00:09:47.499 BaseBdev3' 00:09:47.499 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.499 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:47.499 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.499 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:47.499 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.499 10:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.499 10:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.500 10:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.758 
10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.758 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.758 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.758 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.758 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:47.758 10:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.758 10:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.758 10:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.758 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.758 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.758 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.758 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:47.758 10:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.758 10:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.758 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.758 10:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.758 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.758 
10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.758 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:47.758 10:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.758 10:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.758 [2024-11-15 10:38:08.785406] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:47.758 10:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.758 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:47.758 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:47.758 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:47.758 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:47.758 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:47.758 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:47.758 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.758 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:47.758 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:47.758 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:47.758 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:47.758 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:09:47.758 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.758 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.758 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.759 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.759 10:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.759 10:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.759 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.759 10:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.016 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.016 "name": "Existed_Raid", 00:09:48.016 "uuid": "c2f11ef8-0588-415a-b81c-4e2c2879de1c", 00:09:48.016 "strip_size_kb": 0, 00:09:48.016 "state": "online", 00:09:48.016 "raid_level": "raid1", 00:09:48.016 "superblock": false, 00:09:48.016 "num_base_bdevs": 3, 00:09:48.016 "num_base_bdevs_discovered": 2, 00:09:48.016 "num_base_bdevs_operational": 2, 00:09:48.016 "base_bdevs_list": [ 00:09:48.016 { 00:09:48.016 "name": null, 00:09:48.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.016 "is_configured": false, 00:09:48.016 "data_offset": 0, 00:09:48.016 "data_size": 65536 00:09:48.016 }, 00:09:48.016 { 00:09:48.016 "name": "BaseBdev2", 00:09:48.016 "uuid": "279c5e1e-e5ac-49e4-b6b0-63e733c12afe", 00:09:48.016 "is_configured": true, 00:09:48.016 "data_offset": 0, 00:09:48.016 "data_size": 65536 00:09:48.016 }, 00:09:48.016 { 00:09:48.016 "name": "BaseBdev3", 00:09:48.016 "uuid": "5ee2745d-eef0-4abc-86e5-c15bca85750b", 00:09:48.016 "is_configured": true, 00:09:48.016 
"data_offset": 0, 00:09:48.016 "data_size": 65536 00:09:48.016 } 00:09:48.016 ] 00:09:48.016 }' 00:09:48.016 10:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.016 10:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.274 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:48.274 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:48.274 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.274 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:48.274 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.274 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.274 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.533 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:48.533 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:48.533 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:48.533 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.533 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.533 [2024-11-15 10:38:09.447403] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:48.533 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.533 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:48.533 10:38:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:48.533 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.533 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:48.533 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.533 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.533 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.533 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:48.533 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:48.533 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:48.533 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.533 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.533 [2024-11-15 10:38:09.593750] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:48.533 [2024-11-15 10:38:09.594019] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:48.533 [2024-11-15 10:38:09.680060] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:48.533 [2024-11-15 10:38:09.680399] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:48.533 [2024-11-15 10:38:09.680436] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:48.533 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.533 10:38:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:48.533 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:48.533 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.533 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:48.533 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.533 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.792 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.792 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:48.792 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:48.792 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:48.792 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:48.792 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:48.792 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:48.792 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.792 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.792 BaseBdev2 00:09:48.792 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.792 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:48.792 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:48.792 10:38:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:48.792 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:48.792 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:48.792 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:48.792 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:48.792 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.792 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.792 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.792 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:48.792 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.792 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.792 [ 00:09:48.792 { 00:09:48.792 "name": "BaseBdev2", 00:09:48.792 "aliases": [ 00:09:48.792 "adb929cf-45c7-4885-afde-bc56eebb021c" 00:09:48.792 ], 00:09:48.792 "product_name": "Malloc disk", 00:09:48.792 "block_size": 512, 00:09:48.792 "num_blocks": 65536, 00:09:48.792 "uuid": "adb929cf-45c7-4885-afde-bc56eebb021c", 00:09:48.792 "assigned_rate_limits": { 00:09:48.792 "rw_ios_per_sec": 0, 00:09:48.792 "rw_mbytes_per_sec": 0, 00:09:48.792 "r_mbytes_per_sec": 0, 00:09:48.792 "w_mbytes_per_sec": 0 00:09:48.792 }, 00:09:48.792 "claimed": false, 00:09:48.792 "zoned": false, 00:09:48.792 "supported_io_types": { 00:09:48.792 "read": true, 00:09:48.792 "write": true, 00:09:48.792 "unmap": true, 00:09:48.792 "flush": true, 00:09:48.792 "reset": true, 00:09:48.792 "nvme_admin": 
false, 00:09:48.792 "nvme_io": false, 00:09:48.792 "nvme_io_md": false, 00:09:48.792 "write_zeroes": true, 00:09:48.792 "zcopy": true, 00:09:48.792 "get_zone_info": false, 00:09:48.792 "zone_management": false, 00:09:48.792 "zone_append": false, 00:09:48.792 "compare": false, 00:09:48.792 "compare_and_write": false, 00:09:48.792 "abort": true, 00:09:48.792 "seek_hole": false, 00:09:48.792 "seek_data": false, 00:09:48.792 "copy": true, 00:09:48.792 "nvme_iov_md": false 00:09:48.792 }, 00:09:48.792 "memory_domains": [ 00:09:48.792 { 00:09:48.792 "dma_device_id": "system", 00:09:48.792 "dma_device_type": 1 00:09:48.792 }, 00:09:48.792 { 00:09:48.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.792 "dma_device_type": 2 00:09:48.792 } 00:09:48.792 ], 00:09:48.792 "driver_specific": {} 00:09:48.792 } 00:09:48.792 ] 00:09:48.792 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.792 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:48.792 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:48.792 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:48.792 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:48.792 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.792 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.792 BaseBdev3 00:09:48.792 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.792 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:48.792 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:48.792 10:38:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:48.792 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:48.792 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:48.792 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:48.792 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:48.792 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.792 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.792 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.792 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:48.792 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.792 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.792 [ 00:09:48.792 { 00:09:48.792 "name": "BaseBdev3", 00:09:48.792 "aliases": [ 00:09:48.792 "084b4148-bdbe-44b0-8120-c9cfab9c1757" 00:09:48.792 ], 00:09:48.792 "product_name": "Malloc disk", 00:09:48.792 "block_size": 512, 00:09:48.792 "num_blocks": 65536, 00:09:48.792 "uuid": "084b4148-bdbe-44b0-8120-c9cfab9c1757", 00:09:48.792 "assigned_rate_limits": { 00:09:48.792 "rw_ios_per_sec": 0, 00:09:48.792 "rw_mbytes_per_sec": 0, 00:09:48.792 "r_mbytes_per_sec": 0, 00:09:48.792 "w_mbytes_per_sec": 0 00:09:48.792 }, 00:09:48.792 "claimed": false, 00:09:48.793 "zoned": false, 00:09:48.793 "supported_io_types": { 00:09:48.793 "read": true, 00:09:48.793 "write": true, 00:09:48.793 "unmap": true, 00:09:48.793 "flush": true, 00:09:48.793 "reset": true, 00:09:48.793 "nvme_admin": 
false, 00:09:48.793 "nvme_io": false, 00:09:48.793 "nvme_io_md": false, 00:09:48.793 "write_zeroes": true, 00:09:48.793 "zcopy": true, 00:09:48.793 "get_zone_info": false, 00:09:48.793 "zone_management": false, 00:09:48.793 "zone_append": false, 00:09:48.793 "compare": false, 00:09:48.793 "compare_and_write": false, 00:09:48.793 "abort": true, 00:09:48.793 "seek_hole": false, 00:09:48.793 "seek_data": false, 00:09:48.793 "copy": true, 00:09:48.793 "nvme_iov_md": false 00:09:48.793 }, 00:09:48.793 "memory_domains": [ 00:09:48.793 { 00:09:48.793 "dma_device_id": "system", 00:09:48.793 "dma_device_type": 1 00:09:48.793 }, 00:09:48.793 { 00:09:48.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.793 "dma_device_type": 2 00:09:48.793 } 00:09:48.793 ], 00:09:48.793 "driver_specific": {} 00:09:48.793 } 00:09:48.793 ] 00:09:48.793 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.793 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:48.793 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:48.793 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:48.793 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:48.793 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.793 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.793 [2024-11-15 10:38:09.881620] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:48.793 [2024-11-15 10:38:09.881813] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:48.793 [2024-11-15 10:38:09.881951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev2 is claimed 00:09:48.793 [2024-11-15 10:38:09.884406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:48.793 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.793 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:48.793 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.793 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.793 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.793 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:48.793 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.793 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.793 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.793 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.793 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.793 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.793 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.793 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.793 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.793 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.793 
10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.793 "name": "Existed_Raid", 00:09:48.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.793 "strip_size_kb": 0, 00:09:48.793 "state": "configuring", 00:09:48.793 "raid_level": "raid1", 00:09:48.793 "superblock": false, 00:09:48.793 "num_base_bdevs": 3, 00:09:48.793 "num_base_bdevs_discovered": 2, 00:09:48.793 "num_base_bdevs_operational": 3, 00:09:48.793 "base_bdevs_list": [ 00:09:48.793 { 00:09:48.793 "name": "BaseBdev1", 00:09:48.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.793 "is_configured": false, 00:09:48.793 "data_offset": 0, 00:09:48.793 "data_size": 0 00:09:48.793 }, 00:09:48.793 { 00:09:48.793 "name": "BaseBdev2", 00:09:48.793 "uuid": "adb929cf-45c7-4885-afde-bc56eebb021c", 00:09:48.793 "is_configured": true, 00:09:48.793 "data_offset": 0, 00:09:48.793 "data_size": 65536 00:09:48.793 }, 00:09:48.793 { 00:09:48.793 "name": "BaseBdev3", 00:09:48.793 "uuid": "084b4148-bdbe-44b0-8120-c9cfab9c1757", 00:09:48.793 "is_configured": true, 00:09:48.793 "data_offset": 0, 00:09:48.793 "data_size": 65536 00:09:48.793 } 00:09:48.793 ] 00:09:48.793 }' 00:09:48.793 10:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.793 10:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.358 10:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:49.359 10:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.359 10:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.359 [2024-11-15 10:38:10.401877] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:49.359 10:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.359 10:38:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:49.359 10:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.359 10:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.359 10:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:49.359 10:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:49.359 10:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.359 10:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.359 10:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.359 10:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.359 10:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.359 10:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.359 10:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.359 10:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.359 10:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.359 10:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.359 10:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.359 "name": "Existed_Raid", 00:09:49.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.359 "strip_size_kb": 0, 00:09:49.359 "state": "configuring", 00:09:49.359 
"raid_level": "raid1", 00:09:49.359 "superblock": false, 00:09:49.359 "num_base_bdevs": 3, 00:09:49.359 "num_base_bdevs_discovered": 1, 00:09:49.359 "num_base_bdevs_operational": 3, 00:09:49.359 "base_bdevs_list": [ 00:09:49.359 { 00:09:49.359 "name": "BaseBdev1", 00:09:49.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.359 "is_configured": false, 00:09:49.359 "data_offset": 0, 00:09:49.359 "data_size": 0 00:09:49.359 }, 00:09:49.359 { 00:09:49.359 "name": null, 00:09:49.359 "uuid": "adb929cf-45c7-4885-afde-bc56eebb021c", 00:09:49.359 "is_configured": false, 00:09:49.359 "data_offset": 0, 00:09:49.359 "data_size": 65536 00:09:49.359 }, 00:09:49.359 { 00:09:49.359 "name": "BaseBdev3", 00:09:49.359 "uuid": "084b4148-bdbe-44b0-8120-c9cfab9c1757", 00:09:49.359 "is_configured": true, 00:09:49.359 "data_offset": 0, 00:09:49.359 "data_size": 65536 00:09:49.359 } 00:09:49.359 ] 00:09:49.359 }' 00:09:49.359 10:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.359 10:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.011 10:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.011 10:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:50.011 10:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.011 10:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.011 10:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.011 10:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:50.011 10:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:50.011 10:38:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.011 10:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.011 [2024-11-15 10:38:11.008692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:50.011 BaseBdev1 00:09:50.011 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.011 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:50.011 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:50.011 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:50.011 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:50.011 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:50.011 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:50.011 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:50.011 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.011 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.011 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.011 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:50.011 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.011 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.011 [ 00:09:50.011 { 00:09:50.011 "name": "BaseBdev1", 00:09:50.011 "aliases": [ 00:09:50.011 
"dae08ac7-12d9-466f-a392-867fe00454f6" 00:09:50.011 ], 00:09:50.011 "product_name": "Malloc disk", 00:09:50.011 "block_size": 512, 00:09:50.011 "num_blocks": 65536, 00:09:50.011 "uuid": "dae08ac7-12d9-466f-a392-867fe00454f6", 00:09:50.011 "assigned_rate_limits": { 00:09:50.011 "rw_ios_per_sec": 0, 00:09:50.011 "rw_mbytes_per_sec": 0, 00:09:50.011 "r_mbytes_per_sec": 0, 00:09:50.011 "w_mbytes_per_sec": 0 00:09:50.011 }, 00:09:50.011 "claimed": true, 00:09:50.011 "claim_type": "exclusive_write", 00:09:50.011 "zoned": false, 00:09:50.011 "supported_io_types": { 00:09:50.011 "read": true, 00:09:50.011 "write": true, 00:09:50.011 "unmap": true, 00:09:50.011 "flush": true, 00:09:50.011 "reset": true, 00:09:50.011 "nvme_admin": false, 00:09:50.011 "nvme_io": false, 00:09:50.011 "nvme_io_md": false, 00:09:50.011 "write_zeroes": true, 00:09:50.011 "zcopy": true, 00:09:50.011 "get_zone_info": false, 00:09:50.011 "zone_management": false, 00:09:50.011 "zone_append": false, 00:09:50.011 "compare": false, 00:09:50.011 "compare_and_write": false, 00:09:50.011 "abort": true, 00:09:50.011 "seek_hole": false, 00:09:50.011 "seek_data": false, 00:09:50.011 "copy": true, 00:09:50.011 "nvme_iov_md": false 00:09:50.011 }, 00:09:50.011 "memory_domains": [ 00:09:50.011 { 00:09:50.011 "dma_device_id": "system", 00:09:50.011 "dma_device_type": 1 00:09:50.011 }, 00:09:50.011 { 00:09:50.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.011 "dma_device_type": 2 00:09:50.011 } 00:09:50.011 ], 00:09:50.011 "driver_specific": {} 00:09:50.011 } 00:09:50.011 ] 00:09:50.011 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.011 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:50.011 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:50.011 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- 
# local raid_bdev_name=Existed_Raid 00:09:50.011 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.011 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:50.011 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:50.011 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.011 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.011 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.011 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.011 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.011 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.011 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.011 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.011 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.011 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.011 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.011 "name": "Existed_Raid", 00:09:50.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.011 "strip_size_kb": 0, 00:09:50.011 "state": "configuring", 00:09:50.011 "raid_level": "raid1", 00:09:50.011 "superblock": false, 00:09:50.011 "num_base_bdevs": 3, 00:09:50.011 "num_base_bdevs_discovered": 2, 00:09:50.011 "num_base_bdevs_operational": 3, 00:09:50.011 "base_bdevs_list": [ 
00:09:50.011 { 00:09:50.011 "name": "BaseBdev1", 00:09:50.011 "uuid": "dae08ac7-12d9-466f-a392-867fe00454f6", 00:09:50.011 "is_configured": true, 00:09:50.011 "data_offset": 0, 00:09:50.011 "data_size": 65536 00:09:50.011 }, 00:09:50.011 { 00:09:50.011 "name": null, 00:09:50.012 "uuid": "adb929cf-45c7-4885-afde-bc56eebb021c", 00:09:50.012 "is_configured": false, 00:09:50.012 "data_offset": 0, 00:09:50.012 "data_size": 65536 00:09:50.012 }, 00:09:50.012 { 00:09:50.012 "name": "BaseBdev3", 00:09:50.012 "uuid": "084b4148-bdbe-44b0-8120-c9cfab9c1757", 00:09:50.012 "is_configured": true, 00:09:50.012 "data_offset": 0, 00:09:50.012 "data_size": 65536 00:09:50.012 } 00:09:50.012 ] 00:09:50.012 }' 00:09:50.012 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.012 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.579 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:50.579 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.579 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.579 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.579 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.579 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:50.579 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:50.579 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.579 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.579 [2024-11-15 10:38:11.608886] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:50.579 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.579 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:50.579 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.579 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.579 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:50.579 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:50.579 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.579 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.579 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.579 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.579 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.579 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.579 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.579 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.579 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.579 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.579 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:09:50.579 "name": "Existed_Raid", 00:09:50.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.579 "strip_size_kb": 0, 00:09:50.579 "state": "configuring", 00:09:50.579 "raid_level": "raid1", 00:09:50.579 "superblock": false, 00:09:50.579 "num_base_bdevs": 3, 00:09:50.579 "num_base_bdevs_discovered": 1, 00:09:50.579 "num_base_bdevs_operational": 3, 00:09:50.579 "base_bdevs_list": [ 00:09:50.579 { 00:09:50.579 "name": "BaseBdev1", 00:09:50.579 "uuid": "dae08ac7-12d9-466f-a392-867fe00454f6", 00:09:50.579 "is_configured": true, 00:09:50.579 "data_offset": 0, 00:09:50.579 "data_size": 65536 00:09:50.579 }, 00:09:50.579 { 00:09:50.579 "name": null, 00:09:50.579 "uuid": "adb929cf-45c7-4885-afde-bc56eebb021c", 00:09:50.579 "is_configured": false, 00:09:50.579 "data_offset": 0, 00:09:50.579 "data_size": 65536 00:09:50.579 }, 00:09:50.579 { 00:09:50.579 "name": null, 00:09:50.579 "uuid": "084b4148-bdbe-44b0-8120-c9cfab9c1757", 00:09:50.579 "is_configured": false, 00:09:50.579 "data_offset": 0, 00:09:50.579 "data_size": 65536 00:09:50.579 } 00:09:50.579 ] 00:09:50.579 }' 00:09:50.579 10:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.579 10:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.145 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:51.145 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.145 10:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.145 10:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.145 10:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.145 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 
00:09:51.145 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:51.145 10:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.145 10:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.145 [2024-11-15 10:38:12.169112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:51.145 10:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.145 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:51.145 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.145 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.145 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:51.145 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:51.145 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:51.145 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.145 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.145 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.145 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.145 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.145 10:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.145 10:38:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.145 10:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.145 10:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.145 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.145 "name": "Existed_Raid", 00:09:51.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.145 "strip_size_kb": 0, 00:09:51.145 "state": "configuring", 00:09:51.145 "raid_level": "raid1", 00:09:51.145 "superblock": false, 00:09:51.145 "num_base_bdevs": 3, 00:09:51.145 "num_base_bdevs_discovered": 2, 00:09:51.145 "num_base_bdevs_operational": 3, 00:09:51.145 "base_bdevs_list": [ 00:09:51.145 { 00:09:51.145 "name": "BaseBdev1", 00:09:51.145 "uuid": "dae08ac7-12d9-466f-a392-867fe00454f6", 00:09:51.145 "is_configured": true, 00:09:51.145 "data_offset": 0, 00:09:51.145 "data_size": 65536 00:09:51.145 }, 00:09:51.145 { 00:09:51.145 "name": null, 00:09:51.145 "uuid": "adb929cf-45c7-4885-afde-bc56eebb021c", 00:09:51.145 "is_configured": false, 00:09:51.145 "data_offset": 0, 00:09:51.145 "data_size": 65536 00:09:51.145 }, 00:09:51.145 { 00:09:51.145 "name": "BaseBdev3", 00:09:51.145 "uuid": "084b4148-bdbe-44b0-8120-c9cfab9c1757", 00:09:51.145 "is_configured": true, 00:09:51.145 "data_offset": 0, 00:09:51.145 "data_size": 65536 00:09:51.145 } 00:09:51.145 ] 00:09:51.145 }' 00:09:51.145 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.145 10:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.709 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.709 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:51.709 10:38:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.709 10:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.709 10:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.709 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:51.709 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:51.709 10:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.709 10:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.709 [2024-11-15 10:38:12.741252] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:51.709 10:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.709 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:51.709 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.709 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.709 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:51.709 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:51.709 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:51.709 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.709 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.709 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:51.709 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.709 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.709 10:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.709 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.709 10:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.709 10:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.966 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.966 "name": "Existed_Raid", 00:09:51.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.966 "strip_size_kb": 0, 00:09:51.966 "state": "configuring", 00:09:51.966 "raid_level": "raid1", 00:09:51.966 "superblock": false, 00:09:51.966 "num_base_bdevs": 3, 00:09:51.966 "num_base_bdevs_discovered": 1, 00:09:51.966 "num_base_bdevs_operational": 3, 00:09:51.966 "base_bdevs_list": [ 00:09:51.966 { 00:09:51.966 "name": null, 00:09:51.966 "uuid": "dae08ac7-12d9-466f-a392-867fe00454f6", 00:09:51.966 "is_configured": false, 00:09:51.966 "data_offset": 0, 00:09:51.966 "data_size": 65536 00:09:51.966 }, 00:09:51.966 { 00:09:51.966 "name": null, 00:09:51.966 "uuid": "adb929cf-45c7-4885-afde-bc56eebb021c", 00:09:51.966 "is_configured": false, 00:09:51.966 "data_offset": 0, 00:09:51.966 "data_size": 65536 00:09:51.966 }, 00:09:51.966 { 00:09:51.966 "name": "BaseBdev3", 00:09:51.966 "uuid": "084b4148-bdbe-44b0-8120-c9cfab9c1757", 00:09:51.966 "is_configured": true, 00:09:51.966 "data_offset": 0, 00:09:51.966 "data_size": 65536 00:09:51.966 } 00:09:51.966 ] 00:09:51.966 }' 00:09:51.966 10:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:51.966 10:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.225 10:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.225 10:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.225 10:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.225 10:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:52.225 10:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.225 10:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:52.225 10:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:52.225 10:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.225 10:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.225 [2024-11-15 10:38:13.375854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:52.225 10:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.225 10:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:52.225 10:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.225 10:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.225 10:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:52.225 10:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:52.225 10:38:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:52.225 10:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.225 10:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.225 10:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.225 10:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.483 10:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.483 10:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.483 10:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.483 10:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.483 10:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.483 10:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.483 "name": "Existed_Raid", 00:09:52.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.483 "strip_size_kb": 0, 00:09:52.483 "state": "configuring", 00:09:52.483 "raid_level": "raid1", 00:09:52.483 "superblock": false, 00:09:52.483 "num_base_bdevs": 3, 00:09:52.483 "num_base_bdevs_discovered": 2, 00:09:52.483 "num_base_bdevs_operational": 3, 00:09:52.483 "base_bdevs_list": [ 00:09:52.483 { 00:09:52.483 "name": null, 00:09:52.483 "uuid": "dae08ac7-12d9-466f-a392-867fe00454f6", 00:09:52.483 "is_configured": false, 00:09:52.483 "data_offset": 0, 00:09:52.483 "data_size": 65536 00:09:52.483 }, 00:09:52.483 { 00:09:52.483 "name": "BaseBdev2", 00:09:52.483 "uuid": "adb929cf-45c7-4885-afde-bc56eebb021c", 00:09:52.483 "is_configured": true, 00:09:52.483 "data_offset": 
0, 00:09:52.483 "data_size": 65536 00:09:52.483 }, 00:09:52.483 { 00:09:52.483 "name": "BaseBdev3", 00:09:52.483 "uuid": "084b4148-bdbe-44b0-8120-c9cfab9c1757", 00:09:52.483 "is_configured": true, 00:09:52.483 "data_offset": 0, 00:09:52.483 "data_size": 65536 00:09:52.483 } 00:09:52.483 ] 00:09:52.483 }' 00:09:52.483 10:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.483 10:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.741 10:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:52.741 10:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.741 10:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.741 10:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.741 10:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.999 10:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:52.999 10:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:52.999 10:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.999 10:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.999 10:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.999 10:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.999 10:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u dae08ac7-12d9-466f-a392-867fe00454f6 00:09:52.999 10:38:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.999 10:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.999 [2024-11-15 10:38:14.022436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:52.999 NewBaseBdev 00:09:52.999 [2024-11-15 10:38:14.022717] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:52.999 [2024-11-15 10:38:14.022741] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:52.999 [2024-11-15 10:38:14.023065] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:52.999 [2024-11-15 10:38:14.023280] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:52.999 [2024-11-15 10:38:14.023303] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:52.999 [2024-11-15 10:38:14.023680] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:52.999 10:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.999 10:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:52.999 10:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:52.999 10:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:52.999 10:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:52.999 10:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:52.999 10:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:52.999 10:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:52.999 
10:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.999 10:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.999 10:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.999 10:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:52.999 10:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.999 10:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.999 [ 00:09:52.999 { 00:09:52.999 "name": "NewBaseBdev", 00:09:52.999 "aliases": [ 00:09:52.999 "dae08ac7-12d9-466f-a392-867fe00454f6" 00:09:52.999 ], 00:09:52.999 "product_name": "Malloc disk", 00:09:52.999 "block_size": 512, 00:09:52.999 "num_blocks": 65536, 00:09:52.999 "uuid": "dae08ac7-12d9-466f-a392-867fe00454f6", 00:09:52.999 "assigned_rate_limits": { 00:09:52.999 "rw_ios_per_sec": 0, 00:09:52.999 "rw_mbytes_per_sec": 0, 00:09:52.999 "r_mbytes_per_sec": 0, 00:09:52.999 "w_mbytes_per_sec": 0 00:09:52.999 }, 00:09:52.999 "claimed": true, 00:09:52.999 "claim_type": "exclusive_write", 00:09:52.999 "zoned": false, 00:09:52.999 "supported_io_types": { 00:09:52.999 "read": true, 00:09:52.999 "write": true, 00:09:52.999 "unmap": true, 00:09:52.999 "flush": true, 00:09:52.999 "reset": true, 00:09:52.999 "nvme_admin": false, 00:09:52.999 "nvme_io": false, 00:09:53.000 "nvme_io_md": false, 00:09:53.000 "write_zeroes": true, 00:09:53.000 "zcopy": true, 00:09:53.000 "get_zone_info": false, 00:09:53.000 "zone_management": false, 00:09:53.000 "zone_append": false, 00:09:53.000 "compare": false, 00:09:53.000 "compare_and_write": false, 00:09:53.000 "abort": true, 00:09:53.000 "seek_hole": false, 00:09:53.000 "seek_data": false, 00:09:53.000 "copy": true, 00:09:53.000 "nvme_iov_md": false 00:09:53.000 }, 00:09:53.000 
"memory_domains": [ 00:09:53.000 { 00:09:53.000 "dma_device_id": "system", 00:09:53.000 "dma_device_type": 1 00:09:53.000 }, 00:09:53.000 { 00:09:53.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.000 "dma_device_type": 2 00:09:53.000 } 00:09:53.000 ], 00:09:53.000 "driver_specific": {} 00:09:53.000 } 00:09:53.000 ] 00:09:53.000 10:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.000 10:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:53.000 10:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:53.000 10:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.000 10:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:53.000 10:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:53.000 10:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:53.000 10:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:53.000 10:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.000 10:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.000 10:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.000 10:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.000 10:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.000 10:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.000 10:38:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.000 10:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.000 10:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.000 10:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.000 "name": "Existed_Raid", 00:09:53.000 "uuid": "d8bd4693-3890-4f0f-855a-b8caf75335b6", 00:09:53.000 "strip_size_kb": 0, 00:09:53.000 "state": "online", 00:09:53.000 "raid_level": "raid1", 00:09:53.000 "superblock": false, 00:09:53.000 "num_base_bdevs": 3, 00:09:53.000 "num_base_bdevs_discovered": 3, 00:09:53.000 "num_base_bdevs_operational": 3, 00:09:53.000 "base_bdevs_list": [ 00:09:53.000 { 00:09:53.000 "name": "NewBaseBdev", 00:09:53.000 "uuid": "dae08ac7-12d9-466f-a392-867fe00454f6", 00:09:53.000 "is_configured": true, 00:09:53.000 "data_offset": 0, 00:09:53.000 "data_size": 65536 00:09:53.000 }, 00:09:53.000 { 00:09:53.000 "name": "BaseBdev2", 00:09:53.000 "uuid": "adb929cf-45c7-4885-afde-bc56eebb021c", 00:09:53.000 "is_configured": true, 00:09:53.000 "data_offset": 0, 00:09:53.000 "data_size": 65536 00:09:53.000 }, 00:09:53.000 { 00:09:53.000 "name": "BaseBdev3", 00:09:53.000 "uuid": "084b4148-bdbe-44b0-8120-c9cfab9c1757", 00:09:53.000 "is_configured": true, 00:09:53.000 "data_offset": 0, 00:09:53.000 "data_size": 65536 00:09:53.000 } 00:09:53.000 ] 00:09:53.000 }' 00:09:53.000 10:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.000 10:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.567 10:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:53.567 10:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:53.567 10:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:09:53.567 10:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:53.567 10:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:53.567 10:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:53.567 10:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:53.567 10:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.567 10:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.567 10:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:53.567 [2024-11-15 10:38:14.570994] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:53.567 10:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.567 10:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:53.567 "name": "Existed_Raid", 00:09:53.567 "aliases": [ 00:09:53.567 "d8bd4693-3890-4f0f-855a-b8caf75335b6" 00:09:53.567 ], 00:09:53.567 "product_name": "Raid Volume", 00:09:53.567 "block_size": 512, 00:09:53.567 "num_blocks": 65536, 00:09:53.567 "uuid": "d8bd4693-3890-4f0f-855a-b8caf75335b6", 00:09:53.568 "assigned_rate_limits": { 00:09:53.568 "rw_ios_per_sec": 0, 00:09:53.568 "rw_mbytes_per_sec": 0, 00:09:53.568 "r_mbytes_per_sec": 0, 00:09:53.568 "w_mbytes_per_sec": 0 00:09:53.568 }, 00:09:53.568 "claimed": false, 00:09:53.568 "zoned": false, 00:09:53.568 "supported_io_types": { 00:09:53.568 "read": true, 00:09:53.568 "write": true, 00:09:53.568 "unmap": false, 00:09:53.568 "flush": false, 00:09:53.568 "reset": true, 00:09:53.568 "nvme_admin": false, 00:09:53.568 "nvme_io": false, 00:09:53.568 "nvme_io_md": false, 00:09:53.568 "write_zeroes": true, 
00:09:53.568 "zcopy": false, 00:09:53.568 "get_zone_info": false, 00:09:53.568 "zone_management": false, 00:09:53.568 "zone_append": false, 00:09:53.568 "compare": false, 00:09:53.568 "compare_and_write": false, 00:09:53.568 "abort": false, 00:09:53.568 "seek_hole": false, 00:09:53.568 "seek_data": false, 00:09:53.568 "copy": false, 00:09:53.568 "nvme_iov_md": false 00:09:53.568 }, 00:09:53.568 "memory_domains": [ 00:09:53.568 { 00:09:53.568 "dma_device_id": "system", 00:09:53.568 "dma_device_type": 1 00:09:53.568 }, 00:09:53.568 { 00:09:53.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.568 "dma_device_type": 2 00:09:53.568 }, 00:09:53.568 { 00:09:53.568 "dma_device_id": "system", 00:09:53.568 "dma_device_type": 1 00:09:53.568 }, 00:09:53.568 { 00:09:53.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.568 "dma_device_type": 2 00:09:53.568 }, 00:09:53.568 { 00:09:53.568 "dma_device_id": "system", 00:09:53.568 "dma_device_type": 1 00:09:53.568 }, 00:09:53.568 { 00:09:53.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.568 "dma_device_type": 2 00:09:53.568 } 00:09:53.568 ], 00:09:53.568 "driver_specific": { 00:09:53.568 "raid": { 00:09:53.568 "uuid": "d8bd4693-3890-4f0f-855a-b8caf75335b6", 00:09:53.568 "strip_size_kb": 0, 00:09:53.568 "state": "online", 00:09:53.568 "raid_level": "raid1", 00:09:53.568 "superblock": false, 00:09:53.568 "num_base_bdevs": 3, 00:09:53.568 "num_base_bdevs_discovered": 3, 00:09:53.568 "num_base_bdevs_operational": 3, 00:09:53.568 "base_bdevs_list": [ 00:09:53.568 { 00:09:53.568 "name": "NewBaseBdev", 00:09:53.568 "uuid": "dae08ac7-12d9-466f-a392-867fe00454f6", 00:09:53.568 "is_configured": true, 00:09:53.568 "data_offset": 0, 00:09:53.568 "data_size": 65536 00:09:53.568 }, 00:09:53.568 { 00:09:53.568 "name": "BaseBdev2", 00:09:53.568 "uuid": "adb929cf-45c7-4885-afde-bc56eebb021c", 00:09:53.568 "is_configured": true, 00:09:53.568 "data_offset": 0, 00:09:53.568 "data_size": 65536 00:09:53.568 }, 00:09:53.568 { 00:09:53.568 
"name": "BaseBdev3", 00:09:53.568 "uuid": "084b4148-bdbe-44b0-8120-c9cfab9c1757", 00:09:53.568 "is_configured": true, 00:09:53.568 "data_offset": 0, 00:09:53.568 "data_size": 65536 00:09:53.568 } 00:09:53.568 ] 00:09:53.568 } 00:09:53.568 } 00:09:53.568 }' 00:09:53.568 10:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:53.568 10:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:53.568 BaseBdev2 00:09:53.568 BaseBdev3' 00:09:53.568 10:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.826 10:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:53.826 10:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.826 10:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.826 10:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:53.826 10:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.826 10:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.826 10:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.826 10:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.826 10:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.826 10:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.826 10:38:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:53.826 10:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.826 10:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.826 10:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.826 10:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.826 10:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.826 10:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.826 10:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.826 10:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:53.826 10:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.826 10:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.826 10:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.826 10:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.826 10:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.826 10:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.826 10:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:53.826 10:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.826 10:38:14 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:53.826 [2024-11-15 10:38:14.902687] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:53.826 [2024-11-15 10:38:14.902852] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:53.826 [2024-11-15 10:38:14.903046] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:53.826 [2024-11-15 10:38:14.903536] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:53.826 [2024-11-15 10:38:14.903669] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:53.826 10:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.826 10:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67428 00:09:53.826 10:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67428 ']' 00:09:53.826 10:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67428 00:09:53.826 10:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:53.826 10:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:53.826 10:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67428 00:09:53.826 killing process with pid 67428 00:09:53.826 10:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:53.826 10:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:53.826 10:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67428' 00:09:53.826 10:38:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 67428 00:09:53.826 [2024-11-15 10:38:14.938793] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:53.826 10:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67428 00:09:54.086 [2024-11-15 10:38:15.198670] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:55.468 ************************************ 00:09:55.468 END TEST raid_state_function_test 00:09:55.468 ************************************ 00:09:55.468 10:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:55.468 00:09:55.468 real 0m11.778s 00:09:55.468 user 0m19.596s 00:09:55.468 sys 0m1.531s 00:09:55.468 10:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.468 10:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.468 10:38:16 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:09:55.468 10:38:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:55.468 10:38:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.468 10:38:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:55.468 ************************************ 00:09:55.468 START TEST raid_state_function_test_sb 00:09:55.468 ************************************ 00:09:55.468 10:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:09:55.468 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:55.468 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:55.468 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:55.468 10:38:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:55.468 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:55.468 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:55.468 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:55.468 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:55.468 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:55.468 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:55.468 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:55.468 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:55.468 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:55.468 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:55.468 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:55.468 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:55.468 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:55.468 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:55.468 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:55.468 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:55.468 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:55.468 10:38:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:55.468 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:55.468 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:55.468 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:55.468 Process raid pid: 68060 00:09:55.468 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68060 00:09:55.468 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68060' 00:09:55.468 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68060 00:09:55.468 10:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68060 ']' 00:09:55.468 10:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:55.468 10:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.468 10:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:55.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.468 10:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.468 10:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:55.468 10:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.468 [2024-11-15 10:38:16.429649] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:09:55.468 [2024-11-15 10:38:16.430055] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:55.468 [2024-11-15 10:38:16.615626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.727 [2024-11-15 10:38:16.749312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.986 [2024-11-15 10:38:16.957234] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:55.986 [2024-11-15 10:38:16.957539] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:56.244 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:56.244 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:56.244 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:56.244 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.244 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.244 [2024-11-15 10:38:17.377392] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:56.244 [2024-11-15 10:38:17.377475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:56.244 [2024-11-15 10:38:17.377493] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:56.244 [2024-11-15 10:38:17.377667] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:56.244 [2024-11-15 10:38:17.377697] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:56.244 [2024-11-15 10:38:17.377717] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:56.244 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.244 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:56.244 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.245 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.245 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:56.245 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:56.245 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.245 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.245 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.245 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.245 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.245 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.245 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.245 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.245 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.503 10:38:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.503 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.503 "name": "Existed_Raid", 00:09:56.503 "uuid": "9550afe5-5f17-4031-83a4-7d3c7d59498d", 00:09:56.503 "strip_size_kb": 0, 00:09:56.503 "state": "configuring", 00:09:56.503 "raid_level": "raid1", 00:09:56.503 "superblock": true, 00:09:56.503 "num_base_bdevs": 3, 00:09:56.503 "num_base_bdevs_discovered": 0, 00:09:56.503 "num_base_bdevs_operational": 3, 00:09:56.503 "base_bdevs_list": [ 00:09:56.503 { 00:09:56.503 "name": "BaseBdev1", 00:09:56.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.503 "is_configured": false, 00:09:56.503 "data_offset": 0, 00:09:56.503 "data_size": 0 00:09:56.503 }, 00:09:56.503 { 00:09:56.503 "name": "BaseBdev2", 00:09:56.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.503 "is_configured": false, 00:09:56.503 "data_offset": 0, 00:09:56.503 "data_size": 0 00:09:56.503 }, 00:09:56.503 { 00:09:56.503 "name": "BaseBdev3", 00:09:56.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.503 "is_configured": false, 00:09:56.503 "data_offset": 0, 00:09:56.503 "data_size": 0 00:09:56.503 } 00:09:56.503 ] 00:09:56.503 }' 00:09:56.503 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.503 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.761 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:56.761 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.761 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.761 [2024-11-15 10:38:17.889425] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:56.761 [2024-11-15 10:38:17.889621] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:56.761 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.761 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:56.761 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.761 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.761 [2024-11-15 10:38:17.897413] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:56.761 [2024-11-15 10:38:17.897609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:56.761 [2024-11-15 10:38:17.897637] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:56.761 [2024-11-15 10:38:17.897655] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:56.761 [2024-11-15 10:38:17.897665] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:56.761 [2024-11-15 10:38:17.897680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:56.761 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.761 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:56.761 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.761 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.019 [2024-11-15 10:38:17.942336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:57.019 BaseBdev1 
00:09:57.019 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.019 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:57.019 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:57.019 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:57.019 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:57.019 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:57.019 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:57.019 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:57.019 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.019 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.019 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.019 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:57.019 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.019 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.019 [ 00:09:57.019 { 00:09:57.019 "name": "BaseBdev1", 00:09:57.019 "aliases": [ 00:09:57.019 "3d7cc15d-a2fc-4633-9a02-9e4d0442f464" 00:09:57.019 ], 00:09:57.019 "product_name": "Malloc disk", 00:09:57.019 "block_size": 512, 00:09:57.019 "num_blocks": 65536, 00:09:57.019 "uuid": "3d7cc15d-a2fc-4633-9a02-9e4d0442f464", 00:09:57.019 "assigned_rate_limits": { 00:09:57.019 
"rw_ios_per_sec": 0, 00:09:57.019 "rw_mbytes_per_sec": 0, 00:09:57.019 "r_mbytes_per_sec": 0, 00:09:57.019 "w_mbytes_per_sec": 0 00:09:57.019 }, 00:09:57.019 "claimed": true, 00:09:57.019 "claim_type": "exclusive_write", 00:09:57.019 "zoned": false, 00:09:57.019 "supported_io_types": { 00:09:57.019 "read": true, 00:09:57.019 "write": true, 00:09:57.019 "unmap": true, 00:09:57.019 "flush": true, 00:09:57.019 "reset": true, 00:09:57.019 "nvme_admin": false, 00:09:57.019 "nvme_io": false, 00:09:57.019 "nvme_io_md": false, 00:09:57.019 "write_zeroes": true, 00:09:57.019 "zcopy": true, 00:09:57.019 "get_zone_info": false, 00:09:57.019 "zone_management": false, 00:09:57.019 "zone_append": false, 00:09:57.019 "compare": false, 00:09:57.019 "compare_and_write": false, 00:09:57.019 "abort": true, 00:09:57.019 "seek_hole": false, 00:09:57.019 "seek_data": false, 00:09:57.019 "copy": true, 00:09:57.019 "nvme_iov_md": false 00:09:57.019 }, 00:09:57.019 "memory_domains": [ 00:09:57.019 { 00:09:57.019 "dma_device_id": "system", 00:09:57.019 "dma_device_type": 1 00:09:57.019 }, 00:09:57.019 { 00:09:57.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.019 "dma_device_type": 2 00:09:57.019 } 00:09:57.019 ], 00:09:57.019 "driver_specific": {} 00:09:57.019 } 00:09:57.019 ] 00:09:57.019 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.019 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:57.019 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:57.019 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.019 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.019 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:09:57.019 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:57.019 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:57.019 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.019 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.019 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.019 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.019 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.019 10:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.019 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.019 10:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.019 10:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.019 10:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.019 "name": "Existed_Raid", 00:09:57.019 "uuid": "c0062b22-6773-4b3f-a48c-588ee18eb750", 00:09:57.019 "strip_size_kb": 0, 00:09:57.019 "state": "configuring", 00:09:57.019 "raid_level": "raid1", 00:09:57.019 "superblock": true, 00:09:57.019 "num_base_bdevs": 3, 00:09:57.019 "num_base_bdevs_discovered": 1, 00:09:57.019 "num_base_bdevs_operational": 3, 00:09:57.019 "base_bdevs_list": [ 00:09:57.019 { 00:09:57.019 "name": "BaseBdev1", 00:09:57.019 "uuid": "3d7cc15d-a2fc-4633-9a02-9e4d0442f464", 00:09:57.019 "is_configured": true, 00:09:57.019 "data_offset": 2048, 00:09:57.019 "data_size": 63488 
00:09:57.019 }, 00:09:57.019 { 00:09:57.019 "name": "BaseBdev2", 00:09:57.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.019 "is_configured": false, 00:09:57.019 "data_offset": 0, 00:09:57.019 "data_size": 0 00:09:57.019 }, 00:09:57.019 { 00:09:57.019 "name": "BaseBdev3", 00:09:57.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.020 "is_configured": false, 00:09:57.020 "data_offset": 0, 00:09:57.020 "data_size": 0 00:09:57.020 } 00:09:57.020 ] 00:09:57.020 }' 00:09:57.020 10:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.020 10:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.585 10:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:57.585 10:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.585 10:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.585 [2024-11-15 10:38:18.486537] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:57.585 [2024-11-15 10:38:18.486734] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:57.585 10:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.585 10:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:57.585 10:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.585 10:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.585 [2024-11-15 10:38:18.494584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:57.585 [2024-11-15 10:38:18.497158] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:57.585 [2024-11-15 10:38:18.497335] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:57.585 [2024-11-15 10:38:18.497459] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:57.585 [2024-11-15 10:38:18.497607] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:57.585 10:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.585 10:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:57.585 10:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:57.585 10:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:57.585 10:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.585 10:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.585 10:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:57.585 10:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:57.586 10:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:57.586 10:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.586 10:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.586 10:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.586 10:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:09:57.586 10:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.586 10:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.586 10:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.586 10:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.586 10:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.586 10:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.586 "name": "Existed_Raid", 00:09:57.586 "uuid": "c652b4ad-d6c3-4732-b1e9-b759a6064c66", 00:09:57.586 "strip_size_kb": 0, 00:09:57.586 "state": "configuring", 00:09:57.586 "raid_level": "raid1", 00:09:57.586 "superblock": true, 00:09:57.586 "num_base_bdevs": 3, 00:09:57.586 "num_base_bdevs_discovered": 1, 00:09:57.586 "num_base_bdevs_operational": 3, 00:09:57.586 "base_bdevs_list": [ 00:09:57.586 { 00:09:57.586 "name": "BaseBdev1", 00:09:57.586 "uuid": "3d7cc15d-a2fc-4633-9a02-9e4d0442f464", 00:09:57.586 "is_configured": true, 00:09:57.586 "data_offset": 2048, 00:09:57.586 "data_size": 63488 00:09:57.586 }, 00:09:57.586 { 00:09:57.586 "name": "BaseBdev2", 00:09:57.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.586 "is_configured": false, 00:09:57.586 "data_offset": 0, 00:09:57.586 "data_size": 0 00:09:57.586 }, 00:09:57.586 { 00:09:57.586 "name": "BaseBdev3", 00:09:57.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.586 "is_configured": false, 00:09:57.586 "data_offset": 0, 00:09:57.586 "data_size": 0 00:09:57.586 } 00:09:57.586 ] 00:09:57.586 }' 00:09:57.586 10:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.586 10:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:57.844 10:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:57.844 10:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.844 10:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.105 [2024-11-15 10:38:19.025228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:58.105 BaseBdev2 00:09:58.105 10:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.105 10:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:58.105 10:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:58.105 10:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:58.105 10:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:58.105 10:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:58.105 10:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:58.105 10:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:58.105 10:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.105 10:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.105 10:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.105 10:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:58.105 10:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:58.105 10:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.105 [ 00:09:58.105 { 00:09:58.105 "name": "BaseBdev2", 00:09:58.105 "aliases": [ 00:09:58.105 "a59876cb-5a69-48bf-beb8-bff2559e2da2" 00:09:58.105 ], 00:09:58.105 "product_name": "Malloc disk", 00:09:58.105 "block_size": 512, 00:09:58.105 "num_blocks": 65536, 00:09:58.105 "uuid": "a59876cb-5a69-48bf-beb8-bff2559e2da2", 00:09:58.105 "assigned_rate_limits": { 00:09:58.106 "rw_ios_per_sec": 0, 00:09:58.106 "rw_mbytes_per_sec": 0, 00:09:58.106 "r_mbytes_per_sec": 0, 00:09:58.106 "w_mbytes_per_sec": 0 00:09:58.106 }, 00:09:58.106 "claimed": true, 00:09:58.106 "claim_type": "exclusive_write", 00:09:58.106 "zoned": false, 00:09:58.106 "supported_io_types": { 00:09:58.106 "read": true, 00:09:58.106 "write": true, 00:09:58.106 "unmap": true, 00:09:58.106 "flush": true, 00:09:58.106 "reset": true, 00:09:58.106 "nvme_admin": false, 00:09:58.106 "nvme_io": false, 00:09:58.106 "nvme_io_md": false, 00:09:58.106 "write_zeroes": true, 00:09:58.106 "zcopy": true, 00:09:58.106 "get_zone_info": false, 00:09:58.106 "zone_management": false, 00:09:58.106 "zone_append": false, 00:09:58.106 "compare": false, 00:09:58.106 "compare_and_write": false, 00:09:58.106 "abort": true, 00:09:58.106 "seek_hole": false, 00:09:58.106 "seek_data": false, 00:09:58.106 "copy": true, 00:09:58.106 "nvme_iov_md": false 00:09:58.106 }, 00:09:58.106 "memory_domains": [ 00:09:58.106 { 00:09:58.106 "dma_device_id": "system", 00:09:58.106 "dma_device_type": 1 00:09:58.106 }, 00:09:58.106 { 00:09:58.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.106 "dma_device_type": 2 00:09:58.106 } 00:09:58.106 ], 00:09:58.106 "driver_specific": {} 00:09:58.106 } 00:09:58.106 ] 00:09:58.106 10:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.106 10:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:09:58.106 10:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:58.106 10:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:58.106 10:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:58.106 10:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.106 10:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.106 10:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.106 10:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.106 10:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.106 10:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.106 10:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.106 10:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.106 10:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.106 10:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.106 10:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.106 10:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.106 10:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.106 10:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.106 
10:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.106 "name": "Existed_Raid", 00:09:58.106 "uuid": "c652b4ad-d6c3-4732-b1e9-b759a6064c66", 00:09:58.106 "strip_size_kb": 0, 00:09:58.106 "state": "configuring", 00:09:58.106 "raid_level": "raid1", 00:09:58.106 "superblock": true, 00:09:58.106 "num_base_bdevs": 3, 00:09:58.106 "num_base_bdevs_discovered": 2, 00:09:58.106 "num_base_bdevs_operational": 3, 00:09:58.106 "base_bdevs_list": [ 00:09:58.106 { 00:09:58.106 "name": "BaseBdev1", 00:09:58.106 "uuid": "3d7cc15d-a2fc-4633-9a02-9e4d0442f464", 00:09:58.106 "is_configured": true, 00:09:58.106 "data_offset": 2048, 00:09:58.106 "data_size": 63488 00:09:58.106 }, 00:09:58.106 { 00:09:58.106 "name": "BaseBdev2", 00:09:58.106 "uuid": "a59876cb-5a69-48bf-beb8-bff2559e2da2", 00:09:58.106 "is_configured": true, 00:09:58.106 "data_offset": 2048, 00:09:58.106 "data_size": 63488 00:09:58.106 }, 00:09:58.106 { 00:09:58.106 "name": "BaseBdev3", 00:09:58.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.106 "is_configured": false, 00:09:58.106 "data_offset": 0, 00:09:58.106 "data_size": 0 00:09:58.106 } 00:09:58.106 ] 00:09:58.106 }' 00:09:58.106 10:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.106 10:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.678 10:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:58.678 10:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.678 10:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.678 [2024-11-15 10:38:19.656031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:58.678 [2024-11-15 10:38:19.656557] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:09:58.678 [2024-11-15 10:38:19.656596] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:58.678 BaseBdev3 00:09:58.678 [2024-11-15 10:38:19.656964] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:58.678 [2024-11-15 10:38:19.657177] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:58.678 [2024-11-15 10:38:19.657201] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:58.678 [2024-11-15 10:38:19.657388] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:58.678 10:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.678 10:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:58.678 10:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:58.678 10:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:58.678 10:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:58.678 10:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:58.678 10:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:58.678 10:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:58.678 10:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.678 10:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.678 10:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.678 10:38:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:58.678 10:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.678 10:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.678 [ 00:09:58.678 { 00:09:58.678 "name": "BaseBdev3", 00:09:58.678 "aliases": [ 00:09:58.678 "448f55e9-3340-4239-8677-02e398f9728e" 00:09:58.678 ], 00:09:58.678 "product_name": "Malloc disk", 00:09:58.678 "block_size": 512, 00:09:58.678 "num_blocks": 65536, 00:09:58.678 "uuid": "448f55e9-3340-4239-8677-02e398f9728e", 00:09:58.678 "assigned_rate_limits": { 00:09:58.678 "rw_ios_per_sec": 0, 00:09:58.678 "rw_mbytes_per_sec": 0, 00:09:58.678 "r_mbytes_per_sec": 0, 00:09:58.678 "w_mbytes_per_sec": 0 00:09:58.678 }, 00:09:58.679 "claimed": true, 00:09:58.679 "claim_type": "exclusive_write", 00:09:58.679 "zoned": false, 00:09:58.679 "supported_io_types": { 00:09:58.679 "read": true, 00:09:58.679 "write": true, 00:09:58.679 "unmap": true, 00:09:58.679 "flush": true, 00:09:58.679 "reset": true, 00:09:58.679 "nvme_admin": false, 00:09:58.679 "nvme_io": false, 00:09:58.679 "nvme_io_md": false, 00:09:58.679 "write_zeroes": true, 00:09:58.679 "zcopy": true, 00:09:58.679 "get_zone_info": false, 00:09:58.679 "zone_management": false, 00:09:58.679 "zone_append": false, 00:09:58.679 "compare": false, 00:09:58.679 "compare_and_write": false, 00:09:58.679 "abort": true, 00:09:58.679 "seek_hole": false, 00:09:58.679 "seek_data": false, 00:09:58.679 "copy": true, 00:09:58.679 "nvme_iov_md": false 00:09:58.679 }, 00:09:58.679 "memory_domains": [ 00:09:58.679 { 00:09:58.679 "dma_device_id": "system", 00:09:58.679 "dma_device_type": 1 00:09:58.679 }, 00:09:58.679 { 00:09:58.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.679 "dma_device_type": 2 00:09:58.679 } 00:09:58.679 ], 00:09:58.679 "driver_specific": {} 00:09:58.679 } 00:09:58.679 ] 
00:09:58.679 10:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.679 10:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:58.679 10:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:58.679 10:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:58.679 10:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:58.679 10:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.679 10:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:58.679 10:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.679 10:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.679 10:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.679 10:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.679 10:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.679 10:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.679 10:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.679 10:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.679 10:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.679 10:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.679 
10:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.679 10:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.679 10:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.679 "name": "Existed_Raid", 00:09:58.679 "uuid": "c652b4ad-d6c3-4732-b1e9-b759a6064c66", 00:09:58.679 "strip_size_kb": 0, 00:09:58.679 "state": "online", 00:09:58.679 "raid_level": "raid1", 00:09:58.679 "superblock": true, 00:09:58.679 "num_base_bdevs": 3, 00:09:58.679 "num_base_bdevs_discovered": 3, 00:09:58.679 "num_base_bdevs_operational": 3, 00:09:58.679 "base_bdevs_list": [ 00:09:58.679 { 00:09:58.679 "name": "BaseBdev1", 00:09:58.679 "uuid": "3d7cc15d-a2fc-4633-9a02-9e4d0442f464", 00:09:58.679 "is_configured": true, 00:09:58.679 "data_offset": 2048, 00:09:58.679 "data_size": 63488 00:09:58.679 }, 00:09:58.679 { 00:09:58.679 "name": "BaseBdev2", 00:09:58.679 "uuid": "a59876cb-5a69-48bf-beb8-bff2559e2da2", 00:09:58.679 "is_configured": true, 00:09:58.679 "data_offset": 2048, 00:09:58.679 "data_size": 63488 00:09:58.679 }, 00:09:58.679 { 00:09:58.679 "name": "BaseBdev3", 00:09:58.679 "uuid": "448f55e9-3340-4239-8677-02e398f9728e", 00:09:58.679 "is_configured": true, 00:09:58.679 "data_offset": 2048, 00:09:58.679 "data_size": 63488 00:09:58.679 } 00:09:58.679 ] 00:09:58.679 }' 00:09:58.679 10:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.679 10:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.246 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:59.246 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:59.246 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:09:59.246 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:59.246 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:59.246 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:59.246 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:59.246 10:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.246 10:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.246 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:59.246 [2024-11-15 10:38:20.172646] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:59.246 10:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.246 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:59.246 "name": "Existed_Raid", 00:09:59.246 "aliases": [ 00:09:59.246 "c652b4ad-d6c3-4732-b1e9-b759a6064c66" 00:09:59.246 ], 00:09:59.246 "product_name": "Raid Volume", 00:09:59.246 "block_size": 512, 00:09:59.246 "num_blocks": 63488, 00:09:59.246 "uuid": "c652b4ad-d6c3-4732-b1e9-b759a6064c66", 00:09:59.246 "assigned_rate_limits": { 00:09:59.246 "rw_ios_per_sec": 0, 00:09:59.246 "rw_mbytes_per_sec": 0, 00:09:59.246 "r_mbytes_per_sec": 0, 00:09:59.246 "w_mbytes_per_sec": 0 00:09:59.246 }, 00:09:59.246 "claimed": false, 00:09:59.246 "zoned": false, 00:09:59.246 "supported_io_types": { 00:09:59.246 "read": true, 00:09:59.246 "write": true, 00:09:59.246 "unmap": false, 00:09:59.246 "flush": false, 00:09:59.246 "reset": true, 00:09:59.246 "nvme_admin": false, 00:09:59.246 "nvme_io": false, 00:09:59.246 "nvme_io_md": false, 00:09:59.246 "write_zeroes": true, 
00:09:59.246 "zcopy": false, 00:09:59.246 "get_zone_info": false, 00:09:59.246 "zone_management": false, 00:09:59.246 "zone_append": false, 00:09:59.246 "compare": false, 00:09:59.246 "compare_and_write": false, 00:09:59.246 "abort": false, 00:09:59.246 "seek_hole": false, 00:09:59.246 "seek_data": false, 00:09:59.246 "copy": false, 00:09:59.246 "nvme_iov_md": false 00:09:59.246 }, 00:09:59.246 "memory_domains": [ 00:09:59.246 { 00:09:59.246 "dma_device_id": "system", 00:09:59.246 "dma_device_type": 1 00:09:59.246 }, 00:09:59.246 { 00:09:59.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.246 "dma_device_type": 2 00:09:59.246 }, 00:09:59.246 { 00:09:59.246 "dma_device_id": "system", 00:09:59.246 "dma_device_type": 1 00:09:59.246 }, 00:09:59.246 { 00:09:59.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.246 "dma_device_type": 2 00:09:59.246 }, 00:09:59.246 { 00:09:59.246 "dma_device_id": "system", 00:09:59.246 "dma_device_type": 1 00:09:59.246 }, 00:09:59.246 { 00:09:59.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.246 "dma_device_type": 2 00:09:59.246 } 00:09:59.246 ], 00:09:59.246 "driver_specific": { 00:09:59.246 "raid": { 00:09:59.246 "uuid": "c652b4ad-d6c3-4732-b1e9-b759a6064c66", 00:09:59.246 "strip_size_kb": 0, 00:09:59.246 "state": "online", 00:09:59.246 "raid_level": "raid1", 00:09:59.246 "superblock": true, 00:09:59.246 "num_base_bdevs": 3, 00:09:59.246 "num_base_bdevs_discovered": 3, 00:09:59.246 "num_base_bdevs_operational": 3, 00:09:59.246 "base_bdevs_list": [ 00:09:59.246 { 00:09:59.246 "name": "BaseBdev1", 00:09:59.246 "uuid": "3d7cc15d-a2fc-4633-9a02-9e4d0442f464", 00:09:59.246 "is_configured": true, 00:09:59.246 "data_offset": 2048, 00:09:59.246 "data_size": 63488 00:09:59.246 }, 00:09:59.246 { 00:09:59.246 "name": "BaseBdev2", 00:09:59.246 "uuid": "a59876cb-5a69-48bf-beb8-bff2559e2da2", 00:09:59.246 "is_configured": true, 00:09:59.246 "data_offset": 2048, 00:09:59.246 "data_size": 63488 00:09:59.247 }, 00:09:59.247 { 
00:09:59.247 "name": "BaseBdev3", 00:09:59.247 "uuid": "448f55e9-3340-4239-8677-02e398f9728e", 00:09:59.247 "is_configured": true, 00:09:59.247 "data_offset": 2048, 00:09:59.247 "data_size": 63488 00:09:59.247 } 00:09:59.247 ] 00:09:59.247 } 00:09:59.247 } 00:09:59.247 }' 00:09:59.247 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:59.247 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:59.247 BaseBdev2 00:09:59.247 BaseBdev3' 00:09:59.247 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.247 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:59.247 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.247 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:59.247 10:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.247 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.247 10:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.247 10:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.247 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.247 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.247 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.247 10:38:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:59.247 10:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.247 10:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.247 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.247 10:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.505 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.505 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.505 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.505 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:59.505 10:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.505 10:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.505 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.505 10:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.505 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.505 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.505 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:59.505 10:38:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.505 10:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.505 [2024-11-15 10:38:20.484367] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:59.505 10:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.505 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:59.505 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:59.505 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:59.505 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:59.505 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:59.505 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:59.505 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.505 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:59.505 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:59.505 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:59.505 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:59.505 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.506 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.506 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.506 
10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.506 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.506 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.506 10:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.506 10:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.506 10:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.506 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.506 "name": "Existed_Raid", 00:09:59.506 "uuid": "c652b4ad-d6c3-4732-b1e9-b759a6064c66", 00:09:59.506 "strip_size_kb": 0, 00:09:59.506 "state": "online", 00:09:59.506 "raid_level": "raid1", 00:09:59.506 "superblock": true, 00:09:59.506 "num_base_bdevs": 3, 00:09:59.506 "num_base_bdevs_discovered": 2, 00:09:59.506 "num_base_bdevs_operational": 2, 00:09:59.506 "base_bdevs_list": [ 00:09:59.506 { 00:09:59.506 "name": null, 00:09:59.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.506 "is_configured": false, 00:09:59.506 "data_offset": 0, 00:09:59.506 "data_size": 63488 00:09:59.506 }, 00:09:59.506 { 00:09:59.506 "name": "BaseBdev2", 00:09:59.506 "uuid": "a59876cb-5a69-48bf-beb8-bff2559e2da2", 00:09:59.506 "is_configured": true, 00:09:59.506 "data_offset": 2048, 00:09:59.506 "data_size": 63488 00:09:59.506 }, 00:09:59.506 { 00:09:59.506 "name": "BaseBdev3", 00:09:59.506 "uuid": "448f55e9-3340-4239-8677-02e398f9728e", 00:09:59.506 "is_configured": true, 00:09:59.506 "data_offset": 2048, 00:09:59.506 "data_size": 63488 00:09:59.506 } 00:09:59.506 ] 00:09:59.506 }' 00:09:59.506 10:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.506 
10:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.073 10:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:00.073 10:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:00.073 10:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.073 10:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:00.073 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.073 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.073 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.073 10:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:00.073 10:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:00.073 10:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:00.073 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.073 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.073 [2024-11-15 10:38:21.159189] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:00.332 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.332 10:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:00.332 10:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:00.332 10:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:00.332 10:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:00.332 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.332 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.332 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.332 10:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:00.332 10:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:00.332 10:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:00.332 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.332 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.332 [2024-11-15 10:38:21.323537] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:00.332 [2024-11-15 10:38:21.323666] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:00.332 [2024-11-15 10:38:21.408222] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:00.332 [2024-11-15 10:38:21.408315] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:00.332 [2024-11-15 10:38:21.408336] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:00.332 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.332 10:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:00.332 10:38:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:00.332 10:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.332 10:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:00.332 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.332 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.332 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.332 10:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:00.332 10:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:00.332 10:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:00.332 10:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:00.332 10:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:00.332 10:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:00.332 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.332 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.591 BaseBdev2 00:10:00.591 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.591 10:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:00.591 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:00.591 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:10:00.591 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:00.591 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:00.591 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:00.591 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:00.591 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.591 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.591 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.591 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:00.591 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.591 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.591 [ 00:10:00.591 { 00:10:00.591 "name": "BaseBdev2", 00:10:00.591 "aliases": [ 00:10:00.591 "5e05a1b0-f965-4bd2-8fe5-cd731972727a" 00:10:00.591 ], 00:10:00.591 "product_name": "Malloc disk", 00:10:00.591 "block_size": 512, 00:10:00.591 "num_blocks": 65536, 00:10:00.591 "uuid": "5e05a1b0-f965-4bd2-8fe5-cd731972727a", 00:10:00.591 "assigned_rate_limits": { 00:10:00.591 "rw_ios_per_sec": 0, 00:10:00.591 "rw_mbytes_per_sec": 0, 00:10:00.591 "r_mbytes_per_sec": 0, 00:10:00.591 "w_mbytes_per_sec": 0 00:10:00.591 }, 00:10:00.591 "claimed": false, 00:10:00.591 "zoned": false, 00:10:00.591 "supported_io_types": { 00:10:00.591 "read": true, 00:10:00.591 "write": true, 00:10:00.591 "unmap": true, 00:10:00.591 "flush": true, 00:10:00.591 "reset": true, 00:10:00.591 "nvme_admin": false, 00:10:00.591 "nvme_io": false, 00:10:00.591 
"nvme_io_md": false, 00:10:00.591 "write_zeroes": true, 00:10:00.591 "zcopy": true, 00:10:00.591 "get_zone_info": false, 00:10:00.591 "zone_management": false, 00:10:00.591 "zone_append": false, 00:10:00.591 "compare": false, 00:10:00.591 "compare_and_write": false, 00:10:00.591 "abort": true, 00:10:00.591 "seek_hole": false, 00:10:00.591 "seek_data": false, 00:10:00.591 "copy": true, 00:10:00.591 "nvme_iov_md": false 00:10:00.591 }, 00:10:00.591 "memory_domains": [ 00:10:00.591 { 00:10:00.591 "dma_device_id": "system", 00:10:00.591 "dma_device_type": 1 00:10:00.591 }, 00:10:00.591 { 00:10:00.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.591 "dma_device_type": 2 00:10:00.591 } 00:10:00.591 ], 00:10:00.591 "driver_specific": {} 00:10:00.591 } 00:10:00.591 ] 00:10:00.591 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.591 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:00.591 10:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:00.591 10:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:00.591 10:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:00.591 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.591 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.591 BaseBdev3 00:10:00.591 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.591 10:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:00.591 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:00.591 10:38:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:00.591 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:00.591 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:00.591 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:00.591 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:00.591 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.591 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.591 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.591 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:00.591 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.591 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.591 [ 00:10:00.591 { 00:10:00.591 "name": "BaseBdev3", 00:10:00.591 "aliases": [ 00:10:00.591 "7cfe23ab-0310-463e-b368-99423647846d" 00:10:00.591 ], 00:10:00.591 "product_name": "Malloc disk", 00:10:00.591 "block_size": 512, 00:10:00.591 "num_blocks": 65536, 00:10:00.591 "uuid": "7cfe23ab-0310-463e-b368-99423647846d", 00:10:00.591 "assigned_rate_limits": { 00:10:00.591 "rw_ios_per_sec": 0, 00:10:00.591 "rw_mbytes_per_sec": 0, 00:10:00.591 "r_mbytes_per_sec": 0, 00:10:00.591 "w_mbytes_per_sec": 0 00:10:00.591 }, 00:10:00.591 "claimed": false, 00:10:00.591 "zoned": false, 00:10:00.591 "supported_io_types": { 00:10:00.591 "read": true, 00:10:00.591 "write": true, 00:10:00.591 "unmap": true, 00:10:00.591 "flush": true, 00:10:00.591 "reset": true, 00:10:00.591 "nvme_admin": false, 
00:10:00.591 "nvme_io": false, 00:10:00.591 "nvme_io_md": false, 00:10:00.591 "write_zeroes": true, 00:10:00.591 "zcopy": true, 00:10:00.591 "get_zone_info": false, 00:10:00.591 "zone_management": false, 00:10:00.591 "zone_append": false, 00:10:00.591 "compare": false, 00:10:00.591 "compare_and_write": false, 00:10:00.591 "abort": true, 00:10:00.591 "seek_hole": false, 00:10:00.591 "seek_data": false, 00:10:00.591 "copy": true, 00:10:00.591 "nvme_iov_md": false 00:10:00.591 }, 00:10:00.591 "memory_domains": [ 00:10:00.591 { 00:10:00.591 "dma_device_id": "system", 00:10:00.591 "dma_device_type": 1 00:10:00.591 }, 00:10:00.592 { 00:10:00.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.592 "dma_device_type": 2 00:10:00.592 } 00:10:00.592 ], 00:10:00.592 "driver_specific": {} 00:10:00.592 } 00:10:00.592 ] 00:10:00.592 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.592 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:00.592 10:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:00.592 10:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:00.592 10:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:00.592 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.592 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.592 [2024-11-15 10:38:21.620808] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:00.592 [2024-11-15 10:38:21.620869] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:00.592 [2024-11-15 10:38:21.620898] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:00.592 [2024-11-15 10:38:21.623308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:00.592 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.592 10:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:00.592 10:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.592 10:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.592 10:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:00.592 10:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:00.592 10:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:00.592 10:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.592 10:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.592 10:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.592 10:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.592 10:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.592 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.592 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.592 10:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.592 
10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.592 10:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.592 "name": "Existed_Raid", 00:10:00.592 "uuid": "71f14415-8886-4b7e-8eb6-baf79ab98a56", 00:10:00.592 "strip_size_kb": 0, 00:10:00.592 "state": "configuring", 00:10:00.592 "raid_level": "raid1", 00:10:00.592 "superblock": true, 00:10:00.592 "num_base_bdevs": 3, 00:10:00.592 "num_base_bdevs_discovered": 2, 00:10:00.592 "num_base_bdevs_operational": 3, 00:10:00.592 "base_bdevs_list": [ 00:10:00.592 { 00:10:00.592 "name": "BaseBdev1", 00:10:00.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.592 "is_configured": false, 00:10:00.592 "data_offset": 0, 00:10:00.592 "data_size": 0 00:10:00.592 }, 00:10:00.592 { 00:10:00.592 "name": "BaseBdev2", 00:10:00.592 "uuid": "5e05a1b0-f965-4bd2-8fe5-cd731972727a", 00:10:00.592 "is_configured": true, 00:10:00.592 "data_offset": 2048, 00:10:00.592 "data_size": 63488 00:10:00.592 }, 00:10:00.592 { 00:10:00.592 "name": "BaseBdev3", 00:10:00.592 "uuid": "7cfe23ab-0310-463e-b368-99423647846d", 00:10:00.592 "is_configured": true, 00:10:00.592 "data_offset": 2048, 00:10:00.592 "data_size": 63488 00:10:00.592 } 00:10:00.592 ] 00:10:00.592 }' 00:10:00.592 10:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.592 10:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.160 10:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:01.160 10:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.160 10:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.160 [2024-11-15 10:38:22.144984] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:01.160 10:38:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.160 10:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:01.160 10:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.160 10:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.160 10:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:01.160 10:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:01.160 10:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.160 10:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.160 10:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.160 10:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.160 10:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.160 10:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.160 10:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.160 10:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.160 10:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.160 10:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.160 10:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.160 "name": 
"Existed_Raid", 00:10:01.160 "uuid": "71f14415-8886-4b7e-8eb6-baf79ab98a56", 00:10:01.160 "strip_size_kb": 0, 00:10:01.160 "state": "configuring", 00:10:01.160 "raid_level": "raid1", 00:10:01.160 "superblock": true, 00:10:01.160 "num_base_bdevs": 3, 00:10:01.160 "num_base_bdevs_discovered": 1, 00:10:01.160 "num_base_bdevs_operational": 3, 00:10:01.160 "base_bdevs_list": [ 00:10:01.160 { 00:10:01.160 "name": "BaseBdev1", 00:10:01.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.160 "is_configured": false, 00:10:01.160 "data_offset": 0, 00:10:01.160 "data_size": 0 00:10:01.160 }, 00:10:01.160 { 00:10:01.160 "name": null, 00:10:01.160 "uuid": "5e05a1b0-f965-4bd2-8fe5-cd731972727a", 00:10:01.160 "is_configured": false, 00:10:01.160 "data_offset": 0, 00:10:01.160 "data_size": 63488 00:10:01.160 }, 00:10:01.160 { 00:10:01.160 "name": "BaseBdev3", 00:10:01.160 "uuid": "7cfe23ab-0310-463e-b368-99423647846d", 00:10:01.160 "is_configured": true, 00:10:01.160 "data_offset": 2048, 00:10:01.160 "data_size": 63488 00:10:01.160 } 00:10:01.160 ] 00:10:01.160 }' 00:10:01.160 10:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.160 10:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.727 10:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.727 10:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.727 10:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.727 10:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:01.727 10:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.727 10:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:01.727 
10:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:01.727 10:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.727 10:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.727 [2024-11-15 10:38:22.747915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:01.727 BaseBdev1 00:10:01.727 10:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.727 10:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:01.727 10:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:01.727 10:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:01.727 10:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:01.727 10:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:01.727 10:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:01.727 10:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:01.727 10:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.727 10:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.727 10:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.727 10:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:01.727 10:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:01.727 10:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.727 [ 00:10:01.727 { 00:10:01.727 "name": "BaseBdev1", 00:10:01.727 "aliases": [ 00:10:01.727 "e0794810-3091-4a52-b93c-592636785f38" 00:10:01.727 ], 00:10:01.727 "product_name": "Malloc disk", 00:10:01.727 "block_size": 512, 00:10:01.727 "num_blocks": 65536, 00:10:01.727 "uuid": "e0794810-3091-4a52-b93c-592636785f38", 00:10:01.727 "assigned_rate_limits": { 00:10:01.727 "rw_ios_per_sec": 0, 00:10:01.727 "rw_mbytes_per_sec": 0, 00:10:01.727 "r_mbytes_per_sec": 0, 00:10:01.727 "w_mbytes_per_sec": 0 00:10:01.727 }, 00:10:01.727 "claimed": true, 00:10:01.727 "claim_type": "exclusive_write", 00:10:01.727 "zoned": false, 00:10:01.727 "supported_io_types": { 00:10:01.727 "read": true, 00:10:01.727 "write": true, 00:10:01.727 "unmap": true, 00:10:01.727 "flush": true, 00:10:01.727 "reset": true, 00:10:01.727 "nvme_admin": false, 00:10:01.727 "nvme_io": false, 00:10:01.727 "nvme_io_md": false, 00:10:01.727 "write_zeroes": true, 00:10:01.727 "zcopy": true, 00:10:01.727 "get_zone_info": false, 00:10:01.727 "zone_management": false, 00:10:01.727 "zone_append": false, 00:10:01.727 "compare": false, 00:10:01.727 "compare_and_write": false, 00:10:01.727 "abort": true, 00:10:01.727 "seek_hole": false, 00:10:01.727 "seek_data": false, 00:10:01.727 "copy": true, 00:10:01.727 "nvme_iov_md": false 00:10:01.727 }, 00:10:01.727 "memory_domains": [ 00:10:01.727 { 00:10:01.727 "dma_device_id": "system", 00:10:01.727 "dma_device_type": 1 00:10:01.727 }, 00:10:01.727 { 00:10:01.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.727 "dma_device_type": 2 00:10:01.727 } 00:10:01.727 ], 00:10:01.727 "driver_specific": {} 00:10:01.727 } 00:10:01.727 ] 00:10:01.727 10:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.727 10:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:01.727 
10:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:01.727 10:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.727 10:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.727 10:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:01.727 10:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:01.727 10:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.727 10:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.727 10:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.727 10:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.727 10:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.728 10:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.728 10:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.728 10:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.728 10:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.728 10:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.728 10:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.728 "name": "Existed_Raid", 00:10:01.728 "uuid": "71f14415-8886-4b7e-8eb6-baf79ab98a56", 00:10:01.728 "strip_size_kb": 0, 
00:10:01.728 "state": "configuring", 00:10:01.728 "raid_level": "raid1", 00:10:01.728 "superblock": true, 00:10:01.728 "num_base_bdevs": 3, 00:10:01.728 "num_base_bdevs_discovered": 2, 00:10:01.728 "num_base_bdevs_operational": 3, 00:10:01.728 "base_bdevs_list": [ 00:10:01.728 { 00:10:01.728 "name": "BaseBdev1", 00:10:01.728 "uuid": "e0794810-3091-4a52-b93c-592636785f38", 00:10:01.728 "is_configured": true, 00:10:01.728 "data_offset": 2048, 00:10:01.728 "data_size": 63488 00:10:01.728 }, 00:10:01.728 { 00:10:01.728 "name": null, 00:10:01.728 "uuid": "5e05a1b0-f965-4bd2-8fe5-cd731972727a", 00:10:01.728 "is_configured": false, 00:10:01.728 "data_offset": 0, 00:10:01.728 "data_size": 63488 00:10:01.728 }, 00:10:01.728 { 00:10:01.728 "name": "BaseBdev3", 00:10:01.728 "uuid": "7cfe23ab-0310-463e-b368-99423647846d", 00:10:01.728 "is_configured": true, 00:10:01.728 "data_offset": 2048, 00:10:01.728 "data_size": 63488 00:10:01.728 } 00:10:01.728 ] 00:10:01.728 }' 00:10:01.728 10:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.728 10:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.325 10:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.325 10:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.325 10:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:02.325 10:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.325 10:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.325 10:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:02.325 10:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:10:02.325 10:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.325 10:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.325 [2024-11-15 10:38:23.348135] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:02.325 10:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.325 10:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:02.325 10:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.325 10:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.325 10:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.325 10:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.325 10:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.325 10:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.325 10:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.325 10:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.325 10:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.325 10:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.325 10:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.325 10:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:02.325 10:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.325 10:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.325 10:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.325 "name": "Existed_Raid", 00:10:02.325 "uuid": "71f14415-8886-4b7e-8eb6-baf79ab98a56", 00:10:02.325 "strip_size_kb": 0, 00:10:02.325 "state": "configuring", 00:10:02.325 "raid_level": "raid1", 00:10:02.325 "superblock": true, 00:10:02.325 "num_base_bdevs": 3, 00:10:02.325 "num_base_bdevs_discovered": 1, 00:10:02.325 "num_base_bdevs_operational": 3, 00:10:02.326 "base_bdevs_list": [ 00:10:02.326 { 00:10:02.326 "name": "BaseBdev1", 00:10:02.326 "uuid": "e0794810-3091-4a52-b93c-592636785f38", 00:10:02.326 "is_configured": true, 00:10:02.326 "data_offset": 2048, 00:10:02.326 "data_size": 63488 00:10:02.326 }, 00:10:02.326 { 00:10:02.326 "name": null, 00:10:02.326 "uuid": "5e05a1b0-f965-4bd2-8fe5-cd731972727a", 00:10:02.326 "is_configured": false, 00:10:02.326 "data_offset": 0, 00:10:02.326 "data_size": 63488 00:10:02.326 }, 00:10:02.326 { 00:10:02.326 "name": null, 00:10:02.326 "uuid": "7cfe23ab-0310-463e-b368-99423647846d", 00:10:02.326 "is_configured": false, 00:10:02.326 "data_offset": 0, 00:10:02.326 "data_size": 63488 00:10:02.326 } 00:10:02.326 ] 00:10:02.326 }' 00:10:02.326 10:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.326 10:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.893 10:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.893 10:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:02.893 10:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:02.893 10:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.893 10:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.893 10:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:02.893 10:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:02.893 10:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.893 10:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.893 [2024-11-15 10:38:23.916325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:02.893 10:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.893 10:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:02.893 10:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.893 10:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.893 10:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.893 10:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.893 10:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.893 10:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.893 10:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.893 10:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:10:02.893 10:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.893 10:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.893 10:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.893 10:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.893 10:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.893 10:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.893 10:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.893 "name": "Existed_Raid", 00:10:02.894 "uuid": "71f14415-8886-4b7e-8eb6-baf79ab98a56", 00:10:02.894 "strip_size_kb": 0, 00:10:02.894 "state": "configuring", 00:10:02.894 "raid_level": "raid1", 00:10:02.894 "superblock": true, 00:10:02.894 "num_base_bdevs": 3, 00:10:02.894 "num_base_bdevs_discovered": 2, 00:10:02.894 "num_base_bdevs_operational": 3, 00:10:02.894 "base_bdevs_list": [ 00:10:02.894 { 00:10:02.894 "name": "BaseBdev1", 00:10:02.894 "uuid": "e0794810-3091-4a52-b93c-592636785f38", 00:10:02.894 "is_configured": true, 00:10:02.894 "data_offset": 2048, 00:10:02.894 "data_size": 63488 00:10:02.894 }, 00:10:02.894 { 00:10:02.894 "name": null, 00:10:02.894 "uuid": "5e05a1b0-f965-4bd2-8fe5-cd731972727a", 00:10:02.894 "is_configured": false, 00:10:02.894 "data_offset": 0, 00:10:02.894 "data_size": 63488 00:10:02.894 }, 00:10:02.894 { 00:10:02.894 "name": "BaseBdev3", 00:10:02.894 "uuid": "7cfe23ab-0310-463e-b368-99423647846d", 00:10:02.894 "is_configured": true, 00:10:02.894 "data_offset": 2048, 00:10:02.894 "data_size": 63488 00:10:02.894 } 00:10:02.894 ] 00:10:02.894 }' 00:10:02.894 10:38:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.894 10:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.460 10:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.460 10:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:03.460 10:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.460 10:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.460 10:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.460 10:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:03.460 10:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:03.460 10:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.460 10:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.460 [2024-11-15 10:38:24.524525] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:03.460 10:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.460 10:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:03.460 10:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.460 10:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.460 10:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:03.460 10:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:10:03.460 10:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.460 10:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.460 10:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.460 10:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.460 10:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.460 10:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.460 10:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.460 10:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.460 10:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.716 10:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.716 10:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.716 "name": "Existed_Raid", 00:10:03.716 "uuid": "71f14415-8886-4b7e-8eb6-baf79ab98a56", 00:10:03.716 "strip_size_kb": 0, 00:10:03.716 "state": "configuring", 00:10:03.716 "raid_level": "raid1", 00:10:03.716 "superblock": true, 00:10:03.716 "num_base_bdevs": 3, 00:10:03.716 "num_base_bdevs_discovered": 1, 00:10:03.716 "num_base_bdevs_operational": 3, 00:10:03.716 "base_bdevs_list": [ 00:10:03.716 { 00:10:03.716 "name": null, 00:10:03.716 "uuid": "e0794810-3091-4a52-b93c-592636785f38", 00:10:03.716 "is_configured": false, 00:10:03.716 "data_offset": 0, 00:10:03.716 "data_size": 63488 00:10:03.716 }, 00:10:03.716 { 00:10:03.716 "name": null, 00:10:03.716 "uuid": 
"5e05a1b0-f965-4bd2-8fe5-cd731972727a", 00:10:03.716 "is_configured": false, 00:10:03.716 "data_offset": 0, 00:10:03.716 "data_size": 63488 00:10:03.716 }, 00:10:03.716 { 00:10:03.716 "name": "BaseBdev3", 00:10:03.716 "uuid": "7cfe23ab-0310-463e-b368-99423647846d", 00:10:03.716 "is_configured": true, 00:10:03.716 "data_offset": 2048, 00:10:03.716 "data_size": 63488 00:10:03.716 } 00:10:03.716 ] 00:10:03.716 }' 00:10:03.716 10:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.716 10:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.973 10:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.973 10:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:03.973 10:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.973 10:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.231 10:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.231 10:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:04.231 10:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:04.231 10:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.231 10:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.231 [2024-11-15 10:38:25.179420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:04.231 10:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.231 10:38:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:04.231 10:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.231 10:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.231 10:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:04.231 10:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:04.231 10:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.231 10:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.231 10:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.231 10:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.231 10:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.231 10:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.231 10:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.231 10:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.231 10:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.231 10:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.231 10:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.231 "name": "Existed_Raid", 00:10:04.231 "uuid": "71f14415-8886-4b7e-8eb6-baf79ab98a56", 00:10:04.231 "strip_size_kb": 0, 00:10:04.231 "state": "configuring", 00:10:04.231 
"raid_level": "raid1", 00:10:04.231 "superblock": true, 00:10:04.231 "num_base_bdevs": 3, 00:10:04.231 "num_base_bdevs_discovered": 2, 00:10:04.231 "num_base_bdevs_operational": 3, 00:10:04.231 "base_bdevs_list": [ 00:10:04.231 { 00:10:04.231 "name": null, 00:10:04.231 "uuid": "e0794810-3091-4a52-b93c-592636785f38", 00:10:04.231 "is_configured": false, 00:10:04.231 "data_offset": 0, 00:10:04.231 "data_size": 63488 00:10:04.231 }, 00:10:04.231 { 00:10:04.231 "name": "BaseBdev2", 00:10:04.231 "uuid": "5e05a1b0-f965-4bd2-8fe5-cd731972727a", 00:10:04.231 "is_configured": true, 00:10:04.231 "data_offset": 2048, 00:10:04.231 "data_size": 63488 00:10:04.231 }, 00:10:04.231 { 00:10:04.231 "name": "BaseBdev3", 00:10:04.231 "uuid": "7cfe23ab-0310-463e-b368-99423647846d", 00:10:04.231 "is_configured": true, 00:10:04.231 "data_offset": 2048, 00:10:04.231 "data_size": 63488 00:10:04.231 } 00:10:04.231 ] 00:10:04.231 }' 00:10:04.231 10:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.231 10:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.797 10:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.797 10:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.797 10:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.797 10:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:04.797 10:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.798 10:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:04.798 10:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:04.798 10:38:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.798 10:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.798 10:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.798 10:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.798 10:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e0794810-3091-4a52-b93c-592636785f38 00:10:04.798 10:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.798 10:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.798 [2024-11-15 10:38:25.830183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:04.798 [2024-11-15 10:38:25.830694] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:04.798 [2024-11-15 10:38:25.830721] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:04.798 NewBaseBdev 00:10:04.798 [2024-11-15 10:38:25.831045] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:04.798 [2024-11-15 10:38:25.831242] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:04.798 [2024-11-15 10:38:25.831273] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:04.798 [2024-11-15 10:38:25.831437] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:04.798 10:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.798 10:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:04.798 
10:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:04.798 10:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:04.798 10:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:04.798 10:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:04.798 10:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:04.798 10:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:04.798 10:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.798 10:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.798 10:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.798 10:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:04.798 10:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.798 10:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.798 [ 00:10:04.798 { 00:10:04.798 "name": "NewBaseBdev", 00:10:04.798 "aliases": [ 00:10:04.798 "e0794810-3091-4a52-b93c-592636785f38" 00:10:04.798 ], 00:10:04.798 "product_name": "Malloc disk", 00:10:04.798 "block_size": 512, 00:10:04.798 "num_blocks": 65536, 00:10:04.798 "uuid": "e0794810-3091-4a52-b93c-592636785f38", 00:10:04.798 "assigned_rate_limits": { 00:10:04.798 "rw_ios_per_sec": 0, 00:10:04.798 "rw_mbytes_per_sec": 0, 00:10:04.798 "r_mbytes_per_sec": 0, 00:10:04.798 "w_mbytes_per_sec": 0 00:10:04.798 }, 00:10:04.798 "claimed": true, 00:10:04.798 "claim_type": "exclusive_write", 00:10:04.798 
"zoned": false, 00:10:04.798 "supported_io_types": { 00:10:04.798 "read": true, 00:10:04.798 "write": true, 00:10:04.798 "unmap": true, 00:10:04.798 "flush": true, 00:10:04.798 "reset": true, 00:10:04.798 "nvme_admin": false, 00:10:04.798 "nvme_io": false, 00:10:04.798 "nvme_io_md": false, 00:10:04.798 "write_zeroes": true, 00:10:04.798 "zcopy": true, 00:10:04.798 "get_zone_info": false, 00:10:04.798 "zone_management": false, 00:10:04.798 "zone_append": false, 00:10:04.798 "compare": false, 00:10:04.798 "compare_and_write": false, 00:10:04.798 "abort": true, 00:10:04.798 "seek_hole": false, 00:10:04.798 "seek_data": false, 00:10:04.798 "copy": true, 00:10:04.798 "nvme_iov_md": false 00:10:04.798 }, 00:10:04.798 "memory_domains": [ 00:10:04.798 { 00:10:04.798 "dma_device_id": "system", 00:10:04.798 "dma_device_type": 1 00:10:04.798 }, 00:10:04.798 { 00:10:04.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.798 "dma_device_type": 2 00:10:04.798 } 00:10:04.798 ], 00:10:04.798 "driver_specific": {} 00:10:04.798 } 00:10:04.798 ] 00:10:04.798 10:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.798 10:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:04.798 10:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:04.798 10:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.798 10:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:04.798 10:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:04.798 10:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:04.798 10:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:10:04.798 10:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.798 10:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.798 10:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.798 10:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.798 10:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.798 10:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.798 10:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.798 10:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.798 10:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.798 10:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.798 "name": "Existed_Raid", 00:10:04.798 "uuid": "71f14415-8886-4b7e-8eb6-baf79ab98a56", 00:10:04.798 "strip_size_kb": 0, 00:10:04.798 "state": "online", 00:10:04.798 "raid_level": "raid1", 00:10:04.798 "superblock": true, 00:10:04.798 "num_base_bdevs": 3, 00:10:04.798 "num_base_bdevs_discovered": 3, 00:10:04.798 "num_base_bdevs_operational": 3, 00:10:04.798 "base_bdevs_list": [ 00:10:04.798 { 00:10:04.798 "name": "NewBaseBdev", 00:10:04.798 "uuid": "e0794810-3091-4a52-b93c-592636785f38", 00:10:04.798 "is_configured": true, 00:10:04.798 "data_offset": 2048, 00:10:04.798 "data_size": 63488 00:10:04.798 }, 00:10:04.798 { 00:10:04.798 "name": "BaseBdev2", 00:10:04.798 "uuid": "5e05a1b0-f965-4bd2-8fe5-cd731972727a", 00:10:04.798 "is_configured": true, 00:10:04.798 "data_offset": 2048, 00:10:04.798 "data_size": 63488 00:10:04.798 }, 00:10:04.798 
{ 00:10:04.798 "name": "BaseBdev3", 00:10:04.798 "uuid": "7cfe23ab-0310-463e-b368-99423647846d", 00:10:04.798 "is_configured": true, 00:10:04.798 "data_offset": 2048, 00:10:04.798 "data_size": 63488 00:10:04.798 } 00:10:04.798 ] 00:10:04.798 }' 00:10:04.798 10:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.798 10:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.364 10:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:05.364 10:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:05.364 10:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:05.364 10:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:05.364 10:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:05.364 10:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:05.364 10:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:05.364 10:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.364 10:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:05.364 10:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.364 [2024-11-15 10:38:26.346774] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:05.364 10:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.364 10:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:05.364 "name": "Existed_Raid", 00:10:05.364 
"aliases": [ 00:10:05.364 "71f14415-8886-4b7e-8eb6-baf79ab98a56" 00:10:05.364 ], 00:10:05.364 "product_name": "Raid Volume", 00:10:05.364 "block_size": 512, 00:10:05.364 "num_blocks": 63488, 00:10:05.364 "uuid": "71f14415-8886-4b7e-8eb6-baf79ab98a56", 00:10:05.364 "assigned_rate_limits": { 00:10:05.364 "rw_ios_per_sec": 0, 00:10:05.364 "rw_mbytes_per_sec": 0, 00:10:05.364 "r_mbytes_per_sec": 0, 00:10:05.364 "w_mbytes_per_sec": 0 00:10:05.364 }, 00:10:05.364 "claimed": false, 00:10:05.364 "zoned": false, 00:10:05.364 "supported_io_types": { 00:10:05.364 "read": true, 00:10:05.364 "write": true, 00:10:05.364 "unmap": false, 00:10:05.364 "flush": false, 00:10:05.364 "reset": true, 00:10:05.364 "nvme_admin": false, 00:10:05.364 "nvme_io": false, 00:10:05.364 "nvme_io_md": false, 00:10:05.364 "write_zeroes": true, 00:10:05.364 "zcopy": false, 00:10:05.364 "get_zone_info": false, 00:10:05.364 "zone_management": false, 00:10:05.364 "zone_append": false, 00:10:05.364 "compare": false, 00:10:05.364 "compare_and_write": false, 00:10:05.364 "abort": false, 00:10:05.364 "seek_hole": false, 00:10:05.364 "seek_data": false, 00:10:05.364 "copy": false, 00:10:05.364 "nvme_iov_md": false 00:10:05.364 }, 00:10:05.364 "memory_domains": [ 00:10:05.364 { 00:10:05.364 "dma_device_id": "system", 00:10:05.364 "dma_device_type": 1 00:10:05.364 }, 00:10:05.364 { 00:10:05.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.364 "dma_device_type": 2 00:10:05.364 }, 00:10:05.364 { 00:10:05.364 "dma_device_id": "system", 00:10:05.364 "dma_device_type": 1 00:10:05.364 }, 00:10:05.364 { 00:10:05.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.364 "dma_device_type": 2 00:10:05.364 }, 00:10:05.364 { 00:10:05.364 "dma_device_id": "system", 00:10:05.364 "dma_device_type": 1 00:10:05.364 }, 00:10:05.364 { 00:10:05.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.364 "dma_device_type": 2 00:10:05.364 } 00:10:05.364 ], 00:10:05.364 "driver_specific": { 00:10:05.364 "raid": { 00:10:05.364 
"uuid": "71f14415-8886-4b7e-8eb6-baf79ab98a56", 00:10:05.364 "strip_size_kb": 0, 00:10:05.364 "state": "online", 00:10:05.364 "raid_level": "raid1", 00:10:05.364 "superblock": true, 00:10:05.364 "num_base_bdevs": 3, 00:10:05.364 "num_base_bdevs_discovered": 3, 00:10:05.364 "num_base_bdevs_operational": 3, 00:10:05.364 "base_bdevs_list": [ 00:10:05.364 { 00:10:05.364 "name": "NewBaseBdev", 00:10:05.364 "uuid": "e0794810-3091-4a52-b93c-592636785f38", 00:10:05.364 "is_configured": true, 00:10:05.364 "data_offset": 2048, 00:10:05.364 "data_size": 63488 00:10:05.364 }, 00:10:05.364 { 00:10:05.364 "name": "BaseBdev2", 00:10:05.364 "uuid": "5e05a1b0-f965-4bd2-8fe5-cd731972727a", 00:10:05.364 "is_configured": true, 00:10:05.364 "data_offset": 2048, 00:10:05.364 "data_size": 63488 00:10:05.364 }, 00:10:05.364 { 00:10:05.364 "name": "BaseBdev3", 00:10:05.364 "uuid": "7cfe23ab-0310-463e-b368-99423647846d", 00:10:05.364 "is_configured": true, 00:10:05.364 "data_offset": 2048, 00:10:05.364 "data_size": 63488 00:10:05.364 } 00:10:05.364 ] 00:10:05.364 } 00:10:05.364 } 00:10:05.364 }' 00:10:05.364 10:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:05.364 10:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:05.364 BaseBdev2 00:10:05.364 BaseBdev3' 00:10:05.364 10:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.364 10:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:05.364 10:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.364 10:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:05.364 10:38:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.364 10:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.364 10:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.623 10:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.623 10:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.623 10:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.623 10:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.623 10:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:05.623 10:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.623 10:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.623 10:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.623 10:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.623 10:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.623 10:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.623 10:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.623 10:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:05.623 10:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.623 10:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.623 10:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.623 10:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.623 10:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.623 10:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.623 10:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:05.623 10:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.623 10:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.623 [2024-11-15 10:38:26.682416] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:05.623 [2024-11-15 10:38:26.682457] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:05.623 [2024-11-15 10:38:26.682556] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:05.623 [2024-11-15 10:38:26.682930] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:05.623 [2024-11-15 10:38:26.682954] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:05.623 10:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.623 10:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68060 00:10:05.623 10:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 68060 ']' 
00:10:05.623 10:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68060 00:10:05.623 10:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:05.623 10:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:05.623 10:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68060 00:10:05.623 killing process with pid 68060 00:10:05.623 10:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:05.623 10:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:05.623 10:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68060' 00:10:05.623 10:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68060 00:10:05.623 [2024-11-15 10:38:26.717019] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:05.623 10:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68060 00:10:05.939 [2024-11-15 10:38:26.982942] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:06.902 ************************************ 00:10:06.902 END TEST raid_state_function_test_sb 00:10:06.902 ************************************ 00:10:06.902 10:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:06.902 00:10:06.902 real 0m11.713s 00:10:06.902 user 0m19.490s 00:10:06.902 sys 0m1.574s 00:10:06.902 10:38:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.902 10:38:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.168 10:38:28 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 
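The `jq` filters that this test runs against every bdev reduce a bdev's JSON description to a comparable signature string (`'[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'`) and pick out the configured base bdev names (`select(.is_configured == true).name`). A minimal Python sketch of the same two extractions, using a hypothetical record shaped like the dumps in this log (field values are illustrative assumptions, not live SPDK output):

```python
# Sketch of the two jq filters used by bdev_bdev_raid.sh's verify helpers.
# The sample record mirrors the shape of the JSON dumped in this log; the
# values here are illustrative assumptions, not output from a live target.
bdev = {
    "name": "raid_bdev1",
    "block_size": 512,
    "md_size": None,        # absent/null, as in the dumps above
    "md_interleave": None,
    "dif_type": None,
    "driver_specific": {
        "raid": {
            "base_bdevs_list": [
                {"name": "pt1", "is_configured": True},
                {"name": "pt2", "is_configured": True},
                {"name": "pt3", "is_configured": False},
            ]
        }
    },
}

def signature(b):
    """jq: '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'.
    jq's join() renders null as an empty string, so missing metadata
    fields leave trailing spaces -- which is why the test compares
    against the literal pattern 512 followed by three spaces."""
    fields = [b.get("block_size"), b.get("md_size"),
              b.get("md_interleave"), b.get("dif_type")]
    return " ".join("" if v is None else str(v) for v in fields)

def configured_names(b):
    """jq: '.driver_specific.raid.base_bdevs_list[]
            | select(.is_configured == true).name'"""
    return [base["name"]
            for base in b["driver_specific"]["raid"]["base_bdevs_list"]
            if base["is_configured"]]

print(repr(signature(bdev)))   # '512   '  (three trailing spaces)
print(configured_names(bdev))  # ['pt1', 'pt2']
```

The trailing-space behaviour explains the otherwise cryptic comparison `[[ 512 == \5\1\2\ \ \ ]]` that recurs throughout this log.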
00:10:07.168 10:38:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:07.168 10:38:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.168 10:38:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:07.168 ************************************ 00:10:07.168 START TEST raid_superblock_test 00:10:07.168 ************************************ 00:10:07.168 10:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:10:07.168 10:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:07.168 10:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:07.168 10:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:07.168 10:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:07.168 10:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:07.168 10:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:07.168 10:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:07.168 10:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:07.168 10:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:07.168 10:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:07.168 10:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:07.168 10:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:07.168 10:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:07.168 10:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 
00:10:07.168 10:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:07.168 10:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68697 00:10:07.168 10:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:07.168 10:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68697 00:10:07.168 10:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68697 ']' 00:10:07.168 10:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.168 10:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:07.168 10:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.168 10:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:07.168 10:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.168 [2024-11-15 10:38:28.186657] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:10:07.168 [2024-11-15 10:38:28.186842] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68697 ] 00:10:07.426 [2024-11-15 10:38:28.364234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.426 [2024-11-15 10:38:28.495305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.684 [2024-11-15 10:38:28.699076] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:07.684 [2024-11-15 10:38:28.699153] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:08.251 10:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:08.251 10:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:08.251 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:08.251 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:08.251 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:08.251 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:08.251 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:08.251 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:08.251 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:08.251 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:08.251 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:08.251 
10:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.251 10:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.251 malloc1 00:10:08.251 10:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.251 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:08.251 10:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.251 10:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.251 [2024-11-15 10:38:29.187333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:08.251 [2024-11-15 10:38:29.187580] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:08.251 [2024-11-15 10:38:29.187660] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:08.251 [2024-11-15 10:38:29.187833] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:08.251 [2024-11-15 10:38:29.190686] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:08.251 [2024-11-15 10:38:29.190733] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:08.251 pt1 00:10:08.251 10:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.251 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:08.251 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:08.251 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:08.251 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:08.251 10:38:29 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:08.251 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:08.251 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:08.251 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:08.251 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:08.251 10:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.251 10:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.251 malloc2 00:10:08.251 10:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.251 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:08.251 10:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.251 10:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.251 [2024-11-15 10:38:29.243641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:08.252 [2024-11-15 10:38:29.243710] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:08.252 [2024-11-15 10:38:29.243742] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:08.252 [2024-11-15 10:38:29.243756] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:08.252 [2024-11-15 10:38:29.246546] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:08.252 [2024-11-15 10:38:29.246592] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:08.252 
pt2 00:10:08.252 10:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.252 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:08.252 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:08.252 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:08.252 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:08.252 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:08.252 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:08.252 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:08.252 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:08.252 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:08.252 10:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.252 10:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.252 malloc3 00:10:08.252 10:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.252 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:08.252 10:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.252 10:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.252 [2024-11-15 10:38:29.311760] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:08.252 [2024-11-15 10:38:29.311832] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:08.252 [2024-11-15 10:38:29.311866] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:08.252 [2024-11-15 10:38:29.311881] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:08.252 [2024-11-15 10:38:29.314673] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:08.252 [2024-11-15 10:38:29.314729] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:08.252 pt3 00:10:08.252 10:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.252 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:08.252 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:08.252 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:08.252 10:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.252 10:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.252 [2024-11-15 10:38:29.319812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:08.252 [2024-11-15 10:38:29.322259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:08.252 [2024-11-15 10:38:29.322359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:08.252 [2024-11-15 10:38:29.322595] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:08.252 [2024-11-15 10:38:29.322625] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:08.252 [2024-11-15 10:38:29.322943] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:08.252 
[2024-11-15 10:38:29.323166] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:08.252 [2024-11-15 10:38:29.323187] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:08.252 [2024-11-15 10:38:29.323373] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:08.252 10:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.252 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:08.252 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:08.252 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:08.252 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:08.252 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:08.252 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:08.252 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.252 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.252 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.252 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.252 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.252 10:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.252 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:08.252 10:38:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:08.252 10:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.252 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.252 "name": "raid_bdev1", 00:10:08.252 "uuid": "defd4589-92c2-46a6-b5b6-3043316a31cb", 00:10:08.252 "strip_size_kb": 0, 00:10:08.252 "state": "online", 00:10:08.252 "raid_level": "raid1", 00:10:08.252 "superblock": true, 00:10:08.252 "num_base_bdevs": 3, 00:10:08.252 "num_base_bdevs_discovered": 3, 00:10:08.252 "num_base_bdevs_operational": 3, 00:10:08.252 "base_bdevs_list": [ 00:10:08.252 { 00:10:08.252 "name": "pt1", 00:10:08.252 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:08.252 "is_configured": true, 00:10:08.252 "data_offset": 2048, 00:10:08.252 "data_size": 63488 00:10:08.252 }, 00:10:08.252 { 00:10:08.252 "name": "pt2", 00:10:08.252 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:08.252 "is_configured": true, 00:10:08.252 "data_offset": 2048, 00:10:08.252 "data_size": 63488 00:10:08.252 }, 00:10:08.252 { 00:10:08.252 "name": "pt3", 00:10:08.252 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:08.252 "is_configured": true, 00:10:08.252 "data_offset": 2048, 00:10:08.252 "data_size": 63488 00:10:08.252 } 00:10:08.252 ] 00:10:08.252 }' 00:10:08.252 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.252 10:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.818 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:08.818 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:08.818 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:08.818 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:08.818 10:38:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:08.818 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:08.818 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:08.818 10:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.818 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:08.818 10:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.818 [2024-11-15 10:38:29.836309] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:08.818 10:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.818 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:08.818 "name": "raid_bdev1", 00:10:08.818 "aliases": [ 00:10:08.818 "defd4589-92c2-46a6-b5b6-3043316a31cb" 00:10:08.818 ], 00:10:08.818 "product_name": "Raid Volume", 00:10:08.818 "block_size": 512, 00:10:08.818 "num_blocks": 63488, 00:10:08.818 "uuid": "defd4589-92c2-46a6-b5b6-3043316a31cb", 00:10:08.818 "assigned_rate_limits": { 00:10:08.818 "rw_ios_per_sec": 0, 00:10:08.818 "rw_mbytes_per_sec": 0, 00:10:08.818 "r_mbytes_per_sec": 0, 00:10:08.818 "w_mbytes_per_sec": 0 00:10:08.818 }, 00:10:08.818 "claimed": false, 00:10:08.818 "zoned": false, 00:10:08.818 "supported_io_types": { 00:10:08.818 "read": true, 00:10:08.818 "write": true, 00:10:08.818 "unmap": false, 00:10:08.818 "flush": false, 00:10:08.818 "reset": true, 00:10:08.818 "nvme_admin": false, 00:10:08.818 "nvme_io": false, 00:10:08.818 "nvme_io_md": false, 00:10:08.818 "write_zeroes": true, 00:10:08.818 "zcopy": false, 00:10:08.818 "get_zone_info": false, 00:10:08.818 "zone_management": false, 00:10:08.818 "zone_append": false, 00:10:08.818 "compare": false, 00:10:08.818 
"compare_and_write": false, 00:10:08.818 "abort": false, 00:10:08.818 "seek_hole": false, 00:10:08.818 "seek_data": false, 00:10:08.818 "copy": false, 00:10:08.818 "nvme_iov_md": false 00:10:08.818 }, 00:10:08.818 "memory_domains": [ 00:10:08.818 { 00:10:08.819 "dma_device_id": "system", 00:10:08.819 "dma_device_type": 1 00:10:08.819 }, 00:10:08.819 { 00:10:08.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.819 "dma_device_type": 2 00:10:08.819 }, 00:10:08.819 { 00:10:08.819 "dma_device_id": "system", 00:10:08.819 "dma_device_type": 1 00:10:08.819 }, 00:10:08.819 { 00:10:08.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.819 "dma_device_type": 2 00:10:08.819 }, 00:10:08.819 { 00:10:08.819 "dma_device_id": "system", 00:10:08.819 "dma_device_type": 1 00:10:08.819 }, 00:10:08.819 { 00:10:08.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.819 "dma_device_type": 2 00:10:08.819 } 00:10:08.819 ], 00:10:08.819 "driver_specific": { 00:10:08.819 "raid": { 00:10:08.819 "uuid": "defd4589-92c2-46a6-b5b6-3043316a31cb", 00:10:08.819 "strip_size_kb": 0, 00:10:08.819 "state": "online", 00:10:08.819 "raid_level": "raid1", 00:10:08.819 "superblock": true, 00:10:08.819 "num_base_bdevs": 3, 00:10:08.819 "num_base_bdevs_discovered": 3, 00:10:08.819 "num_base_bdevs_operational": 3, 00:10:08.819 "base_bdevs_list": [ 00:10:08.819 { 00:10:08.819 "name": "pt1", 00:10:08.819 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:08.819 "is_configured": true, 00:10:08.819 "data_offset": 2048, 00:10:08.819 "data_size": 63488 00:10:08.819 }, 00:10:08.819 { 00:10:08.819 "name": "pt2", 00:10:08.819 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:08.819 "is_configured": true, 00:10:08.819 "data_offset": 2048, 00:10:08.819 "data_size": 63488 00:10:08.819 }, 00:10:08.819 { 00:10:08.819 "name": "pt3", 00:10:08.819 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:08.819 "is_configured": true, 00:10:08.819 "data_offset": 2048, 00:10:08.819 "data_size": 63488 00:10:08.819 } 
00:10:08.819 ] 00:10:08.819 } 00:10:08.819 } 00:10:08.819 }' 00:10:08.819 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:08.819 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:08.819 pt2 00:10:08.819 pt3' 00:10:08.819 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.077 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:09.077 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:09.077 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:09.077 10:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.077 10:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.078 10:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.078 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.078 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:09.078 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:09.078 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:09.078 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:09.078 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.078 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.078 10:38:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.078 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.078 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:09.078 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:09.078 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:09.078 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.078 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:09.078 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.078 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.078 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.078 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:09.078 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:09.078 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:09.078 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:09.078 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.078 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.078 [2024-11-15 10:38:30.160281] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:09.078 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:10:09.078 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=defd4589-92c2-46a6-b5b6-3043316a31cb 00:10:09.078 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z defd4589-92c2-46a6-b5b6-3043316a31cb ']' 00:10:09.078 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:09.078 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.078 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.078 [2024-11-15 10:38:30.211981] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:09.078 [2024-11-15 10:38:30.212014] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:09.078 [2024-11-15 10:38:30.212104] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:09.078 [2024-11-15 10:38:30.212201] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:09.078 [2024-11-15 10:38:30.212217] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:09.078 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.078 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:09.078 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.078 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.078 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.078 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.336 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:09.336 
10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:09.336 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:09.336 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:09.336 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.336 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.336 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.336 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:09.336 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:09.336 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.336 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.336 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.336 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:09.336 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:09.336 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.336 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.336 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.336 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:09.336 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:09.336 10:38:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.336 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.336 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.336 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:09.337 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:09.337 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:09.337 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:09.337 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:09.337 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:09.337 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:09.337 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:09.337 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:09.337 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.337 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.337 [2024-11-15 10:38:30.360050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:09.337 [2024-11-15 10:38:30.362591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:09.337 [2024-11-15 10:38:30.362677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc3 is claimed 00:10:09.337 [2024-11-15 10:38:30.362758] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:09.337 [2024-11-15 10:38:30.362832] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:09.337 [2024-11-15 10:38:30.362867] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:09.337 [2024-11-15 10:38:30.362895] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:09.337 [2024-11-15 10:38:30.362910] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:09.337 request: 00:10:09.337 { 00:10:09.337 "name": "raid_bdev1", 00:10:09.337 "raid_level": "raid1", 00:10:09.337 "base_bdevs": [ 00:10:09.337 "malloc1", 00:10:09.337 "malloc2", 00:10:09.337 "malloc3" 00:10:09.337 ], 00:10:09.337 "superblock": false, 00:10:09.337 "method": "bdev_raid_create", 00:10:09.337 "req_id": 1 00:10:09.337 } 00:10:09.337 Got JSON-RPC error response 00:10:09.337 response: 00:10:09.337 { 00:10:09.337 "code": -17, 00:10:09.337 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:09.337 } 00:10:09.337 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:09.337 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:09.337 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:09.337 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:09.337 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:09.337 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.337 10:38:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.337 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.337 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:09.337 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.337 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:09.337 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:09.337 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:09.337 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.337 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.337 [2024-11-15 10:38:30.444050] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:09.337 [2024-11-15 10:38:30.444251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:09.337 [2024-11-15 10:38:30.444297] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:09.337 [2024-11-15 10:38:30.444313] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:09.337 [2024-11-15 10:38:30.447187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:09.337 [2024-11-15 10:38:30.447235] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:09.337 [2024-11-15 10:38:30.447335] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:09.337 [2024-11-15 10:38:30.447407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:09.337 pt1 00:10:09.337 10:38:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.337 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:09.337 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:09.337 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.337 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.337 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.337 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.337 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.337 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.337 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.337 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.337 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.337 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:09.337 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.337 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.337 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.594 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.594 "name": "raid_bdev1", 00:10:09.594 "uuid": "defd4589-92c2-46a6-b5b6-3043316a31cb", 00:10:09.594 "strip_size_kb": 0, 00:10:09.594 "state": "configuring", 00:10:09.594 
"raid_level": "raid1", 00:10:09.594 "superblock": true, 00:10:09.594 "num_base_bdevs": 3, 00:10:09.594 "num_base_bdevs_discovered": 1, 00:10:09.594 "num_base_bdevs_operational": 3, 00:10:09.594 "base_bdevs_list": [ 00:10:09.594 { 00:10:09.594 "name": "pt1", 00:10:09.594 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:09.594 "is_configured": true, 00:10:09.594 "data_offset": 2048, 00:10:09.594 "data_size": 63488 00:10:09.594 }, 00:10:09.594 { 00:10:09.594 "name": null, 00:10:09.594 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:09.594 "is_configured": false, 00:10:09.594 "data_offset": 2048, 00:10:09.594 "data_size": 63488 00:10:09.594 }, 00:10:09.594 { 00:10:09.594 "name": null, 00:10:09.594 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:09.594 "is_configured": false, 00:10:09.594 "data_offset": 2048, 00:10:09.594 "data_size": 63488 00:10:09.594 } 00:10:09.594 ] 00:10:09.594 }' 00:10:09.594 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.594 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.851 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:09.851 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:09.851 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.851 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.851 [2024-11-15 10:38:30.968213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:09.851 [2024-11-15 10:38:30.968290] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:09.851 [2024-11-15 10:38:30.968326] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:09.851 [2024-11-15 10:38:30.968341] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:09.851 [2024-11-15 10:38:30.968935] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:09.851 [2024-11-15 10:38:30.968979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:09.851 [2024-11-15 10:38:30.969093] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:09.851 [2024-11-15 10:38:30.969126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:09.851 pt2 00:10:09.851 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.851 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:09.851 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.851 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.851 [2024-11-15 10:38:30.976192] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:09.851 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.851 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:09.851 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:09.851 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.851 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.851 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.851 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.851 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:09.851 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.851 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.851 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.851 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.851 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.851 10:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:09.851 10:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.851 10:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.116 10:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.116 "name": "raid_bdev1", 00:10:10.116 "uuid": "defd4589-92c2-46a6-b5b6-3043316a31cb", 00:10:10.116 "strip_size_kb": 0, 00:10:10.116 "state": "configuring", 00:10:10.116 "raid_level": "raid1", 00:10:10.116 "superblock": true, 00:10:10.116 "num_base_bdevs": 3, 00:10:10.116 "num_base_bdevs_discovered": 1, 00:10:10.116 "num_base_bdevs_operational": 3, 00:10:10.116 "base_bdevs_list": [ 00:10:10.116 { 00:10:10.116 "name": "pt1", 00:10:10.116 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:10.116 "is_configured": true, 00:10:10.116 "data_offset": 2048, 00:10:10.116 "data_size": 63488 00:10:10.116 }, 00:10:10.116 { 00:10:10.116 "name": null, 00:10:10.116 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:10.116 "is_configured": false, 00:10:10.116 "data_offset": 0, 00:10:10.116 "data_size": 63488 00:10:10.116 }, 00:10:10.116 { 00:10:10.116 "name": null, 00:10:10.116 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:10.116 "is_configured": false, 00:10:10.116 "data_offset": 2048, 00:10:10.116 
"data_size": 63488 00:10:10.116 } 00:10:10.116 ] 00:10:10.116 }' 00:10:10.116 10:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.116 10:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.390 10:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:10.390 10:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:10.390 10:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:10.390 10:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.390 10:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.390 [2024-11-15 10:38:31.484319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:10.390 [2024-11-15 10:38:31.484404] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:10.390 [2024-11-15 10:38:31.484433] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:10.390 [2024-11-15 10:38:31.484450] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:10.390 [2024-11-15 10:38:31.485223] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:10.390 [2024-11-15 10:38:31.485263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:10.390 [2024-11-15 10:38:31.485366] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:10.390 [2024-11-15 10:38:31.485421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:10.390 pt2 00:10:10.390 10:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.390 10:38:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:10.390 10:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:10.390 10:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:10.390 10:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.390 10:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.390 [2024-11-15 10:38:31.492294] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:10.390 [2024-11-15 10:38:31.493553] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:10.390 [2024-11-15 10:38:31.493598] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:10.390 [2024-11-15 10:38:31.493620] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:10.390 [2024-11-15 10:38:31.494084] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:10.390 [2024-11-15 10:38:31.494130] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:10.390 [2024-11-15 10:38:31.494212] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:10.390 [2024-11-15 10:38:31.494245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:10.390 [2024-11-15 10:38:31.494398] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:10.390 [2024-11-15 10:38:31.494422] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:10.390 [2024-11-15 10:38:31.494732] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:10.390 [2024-11-15 10:38:31.494942] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:10:10.390 [2024-11-15 10:38:31.494959] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:10.390 [2024-11-15 10:38:31.495133] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:10.390 pt3 00:10:10.390 10:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.390 10:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:10.390 10:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:10.390 10:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:10.390 10:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:10.390 10:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:10.390 10:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.390 10:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:10.390 10:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.390 10:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.390 10:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.390 10:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.390 10:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.390 10:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.390 10:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:10.390 10:38:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.390 10:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.390 10:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.648 10:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.648 "name": "raid_bdev1", 00:10:10.648 "uuid": "defd4589-92c2-46a6-b5b6-3043316a31cb", 00:10:10.648 "strip_size_kb": 0, 00:10:10.648 "state": "online", 00:10:10.648 "raid_level": "raid1", 00:10:10.648 "superblock": true, 00:10:10.648 "num_base_bdevs": 3, 00:10:10.648 "num_base_bdevs_discovered": 3, 00:10:10.648 "num_base_bdevs_operational": 3, 00:10:10.648 "base_bdevs_list": [ 00:10:10.648 { 00:10:10.648 "name": "pt1", 00:10:10.648 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:10.648 "is_configured": true, 00:10:10.648 "data_offset": 2048, 00:10:10.648 "data_size": 63488 00:10:10.648 }, 00:10:10.648 { 00:10:10.648 "name": "pt2", 00:10:10.648 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:10.648 "is_configured": true, 00:10:10.648 "data_offset": 2048, 00:10:10.648 "data_size": 63488 00:10:10.648 }, 00:10:10.648 { 00:10:10.648 "name": "pt3", 00:10:10.648 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:10.648 "is_configured": true, 00:10:10.648 "data_offset": 2048, 00:10:10.648 "data_size": 63488 00:10:10.648 } 00:10:10.648 ] 00:10:10.648 }' 00:10:10.648 10:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.648 10:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.906 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:10.906 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:10.906 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:10.906 10:38:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:10.906 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:10.906 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:10.906 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:10.906 10:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.906 10:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.906 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:10.906 [2024-11-15 10:38:32.048870] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:11.164 10:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.164 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:11.164 "name": "raid_bdev1", 00:10:11.164 "aliases": [ 00:10:11.164 "defd4589-92c2-46a6-b5b6-3043316a31cb" 00:10:11.164 ], 00:10:11.164 "product_name": "Raid Volume", 00:10:11.164 "block_size": 512, 00:10:11.164 "num_blocks": 63488, 00:10:11.164 "uuid": "defd4589-92c2-46a6-b5b6-3043316a31cb", 00:10:11.164 "assigned_rate_limits": { 00:10:11.164 "rw_ios_per_sec": 0, 00:10:11.164 "rw_mbytes_per_sec": 0, 00:10:11.164 "r_mbytes_per_sec": 0, 00:10:11.164 "w_mbytes_per_sec": 0 00:10:11.164 }, 00:10:11.164 "claimed": false, 00:10:11.164 "zoned": false, 00:10:11.164 "supported_io_types": { 00:10:11.164 "read": true, 00:10:11.164 "write": true, 00:10:11.164 "unmap": false, 00:10:11.164 "flush": false, 00:10:11.164 "reset": true, 00:10:11.164 "nvme_admin": false, 00:10:11.164 "nvme_io": false, 00:10:11.164 "nvme_io_md": false, 00:10:11.164 "write_zeroes": true, 00:10:11.164 "zcopy": false, 00:10:11.164 "get_zone_info": false, 00:10:11.164 
"zone_management": false, 00:10:11.164 "zone_append": false, 00:10:11.164 "compare": false, 00:10:11.164 "compare_and_write": false, 00:10:11.164 "abort": false, 00:10:11.164 "seek_hole": false, 00:10:11.164 "seek_data": false, 00:10:11.164 "copy": false, 00:10:11.164 "nvme_iov_md": false 00:10:11.164 }, 00:10:11.164 "memory_domains": [ 00:10:11.164 { 00:10:11.164 "dma_device_id": "system", 00:10:11.164 "dma_device_type": 1 00:10:11.164 }, 00:10:11.164 { 00:10:11.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.164 "dma_device_type": 2 00:10:11.164 }, 00:10:11.164 { 00:10:11.164 "dma_device_id": "system", 00:10:11.164 "dma_device_type": 1 00:10:11.164 }, 00:10:11.164 { 00:10:11.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.164 "dma_device_type": 2 00:10:11.164 }, 00:10:11.164 { 00:10:11.164 "dma_device_id": "system", 00:10:11.164 "dma_device_type": 1 00:10:11.164 }, 00:10:11.164 { 00:10:11.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.164 "dma_device_type": 2 00:10:11.164 } 00:10:11.164 ], 00:10:11.164 "driver_specific": { 00:10:11.164 "raid": { 00:10:11.164 "uuid": "defd4589-92c2-46a6-b5b6-3043316a31cb", 00:10:11.164 "strip_size_kb": 0, 00:10:11.164 "state": "online", 00:10:11.164 "raid_level": "raid1", 00:10:11.164 "superblock": true, 00:10:11.164 "num_base_bdevs": 3, 00:10:11.164 "num_base_bdevs_discovered": 3, 00:10:11.164 "num_base_bdevs_operational": 3, 00:10:11.164 "base_bdevs_list": [ 00:10:11.164 { 00:10:11.164 "name": "pt1", 00:10:11.164 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:11.164 "is_configured": true, 00:10:11.164 "data_offset": 2048, 00:10:11.164 "data_size": 63488 00:10:11.164 }, 00:10:11.164 { 00:10:11.164 "name": "pt2", 00:10:11.164 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:11.164 "is_configured": true, 00:10:11.164 "data_offset": 2048, 00:10:11.164 "data_size": 63488 00:10:11.164 }, 00:10:11.164 { 00:10:11.164 "name": "pt3", 00:10:11.164 "uuid": "00000000-0000-0000-0000-000000000003", 
00:10:11.164 "is_configured": true, 00:10:11.164 "data_offset": 2048, 00:10:11.164 "data_size": 63488 00:10:11.164 } 00:10:11.164 ] 00:10:11.164 } 00:10:11.164 } 00:10:11.164 }' 00:10:11.164 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:11.164 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:11.164 pt2 00:10:11.164 pt3' 00:10:11.164 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.164 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:11.164 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.164 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:11.164 10:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.164 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.164 10:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.164 10:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.164 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.164 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.164 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.164 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:11.164 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:10:11.164 10:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.164 10:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.164 10:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.164 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.164 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.164 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.164 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:11.164 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.164 10:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.164 10:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.164 10:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.422 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.422 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.422 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:11.422 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:11.422 10:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.422 10:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.422 [2024-11-15 10:38:32.352857] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:10:11.422 10:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.422 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' defd4589-92c2-46a6-b5b6-3043316a31cb '!=' defd4589-92c2-46a6-b5b6-3043316a31cb ']' 00:10:11.422 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:11.422 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:11.422 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:11.422 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:11.422 10:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.422 10:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.422 [2024-11-15 10:38:32.396576] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:11.422 10:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.422 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:11.422 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:11.422 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:11.422 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:11.422 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:11.422 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:11.422 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.422 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:10:11.422 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.422 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.422 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:11.422 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.422 10:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.422 10:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.422 10:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.422 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.422 "name": "raid_bdev1", 00:10:11.422 "uuid": "defd4589-92c2-46a6-b5b6-3043316a31cb", 00:10:11.422 "strip_size_kb": 0, 00:10:11.422 "state": "online", 00:10:11.422 "raid_level": "raid1", 00:10:11.422 "superblock": true, 00:10:11.422 "num_base_bdevs": 3, 00:10:11.422 "num_base_bdevs_discovered": 2, 00:10:11.422 "num_base_bdevs_operational": 2, 00:10:11.422 "base_bdevs_list": [ 00:10:11.422 { 00:10:11.422 "name": null, 00:10:11.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.422 "is_configured": false, 00:10:11.422 "data_offset": 0, 00:10:11.422 "data_size": 63488 00:10:11.422 }, 00:10:11.422 { 00:10:11.422 "name": "pt2", 00:10:11.422 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:11.422 "is_configured": true, 00:10:11.422 "data_offset": 2048, 00:10:11.422 "data_size": 63488 00:10:11.422 }, 00:10:11.422 { 00:10:11.422 "name": "pt3", 00:10:11.422 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:11.422 "is_configured": true, 00:10:11.422 "data_offset": 2048, 00:10:11.422 "data_size": 63488 00:10:11.422 } 00:10:11.422 ] 00:10:11.422 }' 00:10:11.422 10:38:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.422 10:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.990 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:11.990 10:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.990 10:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.990 [2024-11-15 10:38:32.908712] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:11.990 [2024-11-15 10:38:32.908748] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:11.990 [2024-11-15 10:38:32.908845] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:11.990 [2024-11-15 10:38:32.908925] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:11.990 [2024-11-15 10:38:32.908949] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:11.990 10:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.990 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.990 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:11.990 10:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.990 10:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.990 10:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.990 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:11.990 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:11.990 
10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:11.990 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:11.990 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:11.990 10:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.990 10:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.990 10:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.990 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:11.990 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:11.990 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:11.990 10:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.990 10:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.990 10:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.990 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:11.990 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:11.990 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:11.990 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:11.990 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:11.990 10:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.990 10:38:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:11.990 [2024-11-15 10:38:32.984670] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:11.990 [2024-11-15 10:38:32.984740] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:11.990 [2024-11-15 10:38:32.984767] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:11.990 [2024-11-15 10:38:32.984784] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:11.990 [2024-11-15 10:38:32.987663] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:11.990 [2024-11-15 10:38:32.987713] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:11.990 [2024-11-15 10:38:32.987809] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:11.990 [2024-11-15 10:38:32.987872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:11.990 pt2 00:10:11.991 10:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.991 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:11.991 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:11.991 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.991 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:11.991 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:11.991 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:11.991 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.991 10:38:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.991 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.991 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.991 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:11.991 10:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.991 10:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.991 10:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.991 10:38:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.991 10:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.991 "name": "raid_bdev1", 00:10:11.991 "uuid": "defd4589-92c2-46a6-b5b6-3043316a31cb", 00:10:11.991 "strip_size_kb": 0, 00:10:11.991 "state": "configuring", 00:10:11.991 "raid_level": "raid1", 00:10:11.991 "superblock": true, 00:10:11.991 "num_base_bdevs": 3, 00:10:11.991 "num_base_bdevs_discovered": 1, 00:10:11.991 "num_base_bdevs_operational": 2, 00:10:11.991 "base_bdevs_list": [ 00:10:11.991 { 00:10:11.991 "name": null, 00:10:11.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.991 "is_configured": false, 00:10:11.991 "data_offset": 2048, 00:10:11.991 "data_size": 63488 00:10:11.991 }, 00:10:11.991 { 00:10:11.991 "name": "pt2", 00:10:11.991 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:11.991 "is_configured": true, 00:10:11.991 "data_offset": 2048, 00:10:11.991 "data_size": 63488 00:10:11.991 }, 00:10:11.991 { 00:10:11.991 "name": null, 00:10:11.991 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:11.991 "is_configured": false, 00:10:11.991 "data_offset": 2048, 00:10:11.991 "data_size": 63488 00:10:11.991 } 00:10:11.991 ] 00:10:11.991 }' 
00:10:11.991 10:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.991 10:38:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.557 10:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:12.557 10:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:12.557 10:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:10:12.557 10:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:12.557 10:38:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.557 10:38:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.557 [2024-11-15 10:38:33.500882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:12.557 [2024-11-15 10:38:33.500961] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.557 [2024-11-15 10:38:33.500992] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:12.557 [2024-11-15 10:38:33.501009] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.557 [2024-11-15 10:38:33.501625] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.557 [2024-11-15 10:38:33.501663] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:12.557 [2024-11-15 10:38:33.501789] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:12.557 [2024-11-15 10:38:33.501844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:12.557 [2024-11-15 10:38:33.501989] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:12.557 [2024-11-15 10:38:33.502010] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:12.557 [2024-11-15 10:38:33.502327] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:12.557 [2024-11-15 10:38:33.502557] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:12.557 [2024-11-15 10:38:33.502574] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:12.557 [2024-11-15 10:38:33.502757] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:12.557 pt3 00:10:12.557 10:38:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.557 10:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:12.557 10:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:12.557 10:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:12.557 10:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:12.557 10:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:12.557 10:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:12.557 10:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.557 10:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.557 10:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.557 10:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.557 10:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.557 10:38:33 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.557 10:38:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.557 10:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:12.557 10:38:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.557 10:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.557 "name": "raid_bdev1", 00:10:12.557 "uuid": "defd4589-92c2-46a6-b5b6-3043316a31cb", 00:10:12.557 "strip_size_kb": 0, 00:10:12.557 "state": "online", 00:10:12.557 "raid_level": "raid1", 00:10:12.557 "superblock": true, 00:10:12.557 "num_base_bdevs": 3, 00:10:12.557 "num_base_bdevs_discovered": 2, 00:10:12.557 "num_base_bdevs_operational": 2, 00:10:12.557 "base_bdevs_list": [ 00:10:12.557 { 00:10:12.557 "name": null, 00:10:12.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.557 "is_configured": false, 00:10:12.557 "data_offset": 2048, 00:10:12.557 "data_size": 63488 00:10:12.557 }, 00:10:12.557 { 00:10:12.557 "name": "pt2", 00:10:12.557 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:12.557 "is_configured": true, 00:10:12.557 "data_offset": 2048, 00:10:12.557 "data_size": 63488 00:10:12.557 }, 00:10:12.557 { 00:10:12.557 "name": "pt3", 00:10:12.557 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:12.557 "is_configured": true, 00:10:12.557 "data_offset": 2048, 00:10:12.557 "data_size": 63488 00:10:12.557 } 00:10:12.557 ] 00:10:12.557 }' 00:10:12.557 10:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.557 10:38:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.124 10:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:13.124 10:38:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.124 
10:38:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.124 [2024-11-15 10:38:33.984956] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:13.124 [2024-11-15 10:38:33.984997] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:13.124 [2024-11-15 10:38:33.985095] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:13.124 [2024-11-15 10:38:33.985180] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:13.124 [2024-11-15 10:38:33.985196] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:13.124 10:38:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.124 10:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.124 10:38:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.124 10:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:13.124 10:38:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.124 10:38:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.124 10:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:13.124 10:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:13.124 10:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:10:13.124 10:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:10:13.124 10:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:10:13.124 10:38:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.124 10:38:34 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.124 10:38:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.124 10:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:13.124 10:38:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.124 10:38:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.124 [2024-11-15 10:38:34.052975] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:13.124 [2024-11-15 10:38:34.053041] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.125 [2024-11-15 10:38:34.053072] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:13.125 [2024-11-15 10:38:34.053086] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.125 [2024-11-15 10:38:34.055997] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.125 [2024-11-15 10:38:34.056036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:13.125 [2024-11-15 10:38:34.056139] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:13.125 [2024-11-15 10:38:34.056196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:13.125 [2024-11-15 10:38:34.056358] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:13.125 [2024-11-15 10:38:34.056383] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:13.125 [2024-11-15 10:38:34.056407] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:10:13.125 [2024-11-15 
10:38:34.056484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:13.125 pt1 00:10:13.125 10:38:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.125 10:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:10:13.125 10:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:13.125 10:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:13.125 10:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.125 10:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.125 10:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.125 10:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:13.125 10:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.125 10:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.125 10:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.125 10:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.125 10:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.125 10:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:13.125 10:38:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.125 10:38:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.125 10:38:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.125 10:38:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.125 "name": "raid_bdev1", 00:10:13.125 "uuid": "defd4589-92c2-46a6-b5b6-3043316a31cb", 00:10:13.125 "strip_size_kb": 0, 00:10:13.125 "state": "configuring", 00:10:13.125 "raid_level": "raid1", 00:10:13.125 "superblock": true, 00:10:13.125 "num_base_bdevs": 3, 00:10:13.125 "num_base_bdevs_discovered": 1, 00:10:13.125 "num_base_bdevs_operational": 2, 00:10:13.125 "base_bdevs_list": [ 00:10:13.125 { 00:10:13.125 "name": null, 00:10:13.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.125 "is_configured": false, 00:10:13.125 "data_offset": 2048, 00:10:13.125 "data_size": 63488 00:10:13.125 }, 00:10:13.125 { 00:10:13.125 "name": "pt2", 00:10:13.125 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:13.125 "is_configured": true, 00:10:13.125 "data_offset": 2048, 00:10:13.125 "data_size": 63488 00:10:13.125 }, 00:10:13.125 { 00:10:13.125 "name": null, 00:10:13.125 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:13.125 "is_configured": false, 00:10:13.125 "data_offset": 2048, 00:10:13.125 "data_size": 63488 00:10:13.125 } 00:10:13.125 ] 00:10:13.125 }' 00:10:13.125 10:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.125 10:38:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.691 10:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:13.691 10:38:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.691 10:38:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.691 10:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:13.691 10:38:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.691 10:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 
-- # [[ false == \f\a\l\s\e ]] 00:10:13.691 10:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:13.691 10:38:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.691 10:38:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.691 [2024-11-15 10:38:34.609157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:13.691 [2024-11-15 10:38:34.609243] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.691 [2024-11-15 10:38:34.609276] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:13.691 [2024-11-15 10:38:34.609291] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.691 [2024-11-15 10:38:34.609867] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.691 [2024-11-15 10:38:34.609899] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:13.691 [2024-11-15 10:38:34.610005] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:13.691 [2024-11-15 10:38:34.610065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:13.691 [2024-11-15 10:38:34.610229] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:10:13.691 [2024-11-15 10:38:34.610251] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:13.691 [2024-11-15 10:38:34.610604] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:13.691 [2024-11-15 10:38:34.610811] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:10:13.691 [2024-11-15 10:38:34.610845] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with 
name raid_bdev1, raid_bdev 0x617000008900 00:10:13.691 [2024-11-15 10:38:34.611014] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:13.691 pt3 00:10:13.691 10:38:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.691 10:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:13.691 10:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:13.691 10:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:13.691 10:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.691 10:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.691 10:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:13.691 10:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.691 10:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.691 10:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.691 10:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.691 10:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.691 10:38:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.691 10:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:13.691 10:38:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.691 10:38:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.691 10:38:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.691 "name": "raid_bdev1", 00:10:13.691 "uuid": "defd4589-92c2-46a6-b5b6-3043316a31cb", 00:10:13.691 "strip_size_kb": 0, 00:10:13.691 "state": "online", 00:10:13.691 "raid_level": "raid1", 00:10:13.691 "superblock": true, 00:10:13.691 "num_base_bdevs": 3, 00:10:13.691 "num_base_bdevs_discovered": 2, 00:10:13.691 "num_base_bdevs_operational": 2, 00:10:13.691 "base_bdevs_list": [ 00:10:13.691 { 00:10:13.691 "name": null, 00:10:13.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.691 "is_configured": false, 00:10:13.691 "data_offset": 2048, 00:10:13.691 "data_size": 63488 00:10:13.691 }, 00:10:13.691 { 00:10:13.691 "name": "pt2", 00:10:13.691 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:13.691 "is_configured": true, 00:10:13.691 "data_offset": 2048, 00:10:13.691 "data_size": 63488 00:10:13.691 }, 00:10:13.691 { 00:10:13.691 "name": "pt3", 00:10:13.691 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:13.691 "is_configured": true, 00:10:13.691 "data_offset": 2048, 00:10:13.691 "data_size": 63488 00:10:13.691 } 00:10:13.691 ] 00:10:13.691 }' 00:10:13.691 10:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.691 10:38:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.949 10:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:13.949 10:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:13.949 10:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.949 10:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.207 10:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.207 10:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:14.207 
10:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:14.208 10:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:14.208 10:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.208 10:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.208 [2024-11-15 10:38:35.165626] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:14.208 10:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.208 10:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' defd4589-92c2-46a6-b5b6-3043316a31cb '!=' defd4589-92c2-46a6-b5b6-3043316a31cb ']' 00:10:14.208 10:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68697 00:10:14.208 10:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68697 ']' 00:10:14.208 10:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68697 00:10:14.208 10:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:14.208 10:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:14.208 10:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68697 00:10:14.208 killing process with pid 68697 00:10:14.208 10:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:14.208 10:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:14.208 10:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68697' 00:10:14.208 10:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 68697 00:10:14.208 [2024-11-15 
10:38:35.233993] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:14.208 10:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68697 00:10:14.208 [2024-11-15 10:38:35.234104] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:14.208 [2024-11-15 10:38:35.234200] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:14.208 [2024-11-15 10:38:35.234220] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:10:14.542 [2024-11-15 10:38:35.504131] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:15.492 10:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:15.492 00:10:15.492 real 0m8.466s 00:10:15.492 user 0m13.887s 00:10:15.492 sys 0m1.172s 00:10:15.492 10:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:15.492 10:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.492 ************************************ 00:10:15.492 END TEST raid_superblock_test 00:10:15.492 ************************************ 00:10:15.492 10:38:36 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:10:15.492 10:38:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:15.492 10:38:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.493 10:38:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:15.493 ************************************ 00:10:15.493 START TEST raid_read_error_test 00:10:15.493 ************************************ 00:10:15.493 10:38:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:10:15.493 10:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 
00:10:15.493 10:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:15.493 10:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:15.493 10:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:15.493 10:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:15.493 10:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:15.493 10:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:15.493 10:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:15.493 10:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:15.493 10:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:15.493 10:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:15.493 10:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:15.493 10:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:15.493 10:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:15.493 10:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:15.493 10:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:15.493 10:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:15.493 10:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:15.493 10:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:15.493 10:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:15.493 10:38:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:15.493 10:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:15.493 10:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:15.493 10:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:15.493 10:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.CjYEVtbWkY 00:10:15.493 10:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69147 00:10:15.493 10:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69147 00:10:15.493 10:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:15.493 10:38:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69147 ']' 00:10:15.493 10:38:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.493 10:38:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:15.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.493 10:38:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.493 10:38:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:15.493 10:38:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.751 [2024-11-15 10:38:36.748412] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:10:15.751 [2024-11-15 10:38:36.748618] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69147 ] 00:10:16.009 [2024-11-15 10:38:36.935810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.009 [2024-11-15 10:38:37.086900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.267 [2024-11-15 10:38:37.295279] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.267 [2024-11-15 10:38:37.295358] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.832 10:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:16.832 10:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:16.832 10:38:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:16.832 10:38:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:16.832 10:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.832 10:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.832 BaseBdev1_malloc 00:10:16.832 10:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.832 10:38:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:16.832 10:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.832 10:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.832 true 00:10:16.832 10:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:16.832 10:38:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:16.832 10:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.832 10:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.832 [2024-11-15 10:38:37.885146] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:16.832 [2024-11-15 10:38:37.885222] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.832 [2024-11-15 10:38:37.885258] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:16.832 [2024-11-15 10:38:37.885277] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.832 [2024-11-15 10:38:37.888246] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.832 [2024-11-15 10:38:37.888313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:16.832 BaseBdev1 00:10:16.832 10:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.832 10:38:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:16.832 10:38:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:16.832 10:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.832 10:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.832 BaseBdev2_malloc 00:10:16.832 10:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.832 10:38:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:16.832 10:38:37 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.832 10:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.832 true 00:10:16.832 10:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.832 10:38:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:16.832 10:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.832 10:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.832 [2024-11-15 10:38:37.953936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:16.832 [2024-11-15 10:38:37.954014] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.832 [2024-11-15 10:38:37.954045] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:16.832 [2024-11-15 10:38:37.954064] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.832 [2024-11-15 10:38:37.957039] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.832 [2024-11-15 10:38:37.957104] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:16.832 BaseBdev2 00:10:16.832 10:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.832 10:38:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:16.832 10:38:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:16.832 10:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.832 10:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.090 BaseBdev3_malloc 00:10:17.090 10:38:38 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.090 10:38:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:17.090 10:38:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.090 10:38:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.090 true 00:10:17.090 10:38:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.090 10:38:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:17.090 10:38:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.090 10:38:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.090 [2024-11-15 10:38:38.030648] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:17.090 [2024-11-15 10:38:38.030716] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.090 [2024-11-15 10:38:38.030744] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:17.090 [2024-11-15 10:38:38.030764] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.090 [2024-11-15 10:38:38.033558] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.090 [2024-11-15 10:38:38.033608] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:17.090 BaseBdev3 00:10:17.090 10:38:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.090 10:38:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:17.090 10:38:38 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.090 10:38:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.090 [2024-11-15 10:38:38.042741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:17.090 [2024-11-15 10:38:38.045124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:17.090 [2024-11-15 10:38:38.045237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:17.090 [2024-11-15 10:38:38.045547] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:17.090 [2024-11-15 10:38:38.045578] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:17.090 [2024-11-15 10:38:38.045896] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:17.090 [2024-11-15 10:38:38.046139] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:17.090 [2024-11-15 10:38:38.046170] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:17.090 [2024-11-15 10:38:38.046356] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:17.090 10:38:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.090 10:38:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:17.090 10:38:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:17.090 10:38:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:17.090 10:38:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.090 10:38:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.090 10:38:38 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:17.090 10:38:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.090 10:38:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.090 10:38:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.090 10:38:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.090 10:38:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.090 10:38:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.090 10:38:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.090 10:38:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:17.090 10:38:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.090 10:38:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.090 "name": "raid_bdev1", 00:10:17.090 "uuid": "c104a11c-d4f8-47a9-a5c0-062794ee1c23", 00:10:17.090 "strip_size_kb": 0, 00:10:17.090 "state": "online", 00:10:17.090 "raid_level": "raid1", 00:10:17.090 "superblock": true, 00:10:17.090 "num_base_bdevs": 3, 00:10:17.090 "num_base_bdevs_discovered": 3, 00:10:17.090 "num_base_bdevs_operational": 3, 00:10:17.090 "base_bdevs_list": [ 00:10:17.090 { 00:10:17.090 "name": "BaseBdev1", 00:10:17.090 "uuid": "d3243a3c-86e3-56ba-9dac-67915b4ed41b", 00:10:17.090 "is_configured": true, 00:10:17.090 "data_offset": 2048, 00:10:17.090 "data_size": 63488 00:10:17.090 }, 00:10:17.090 { 00:10:17.090 "name": "BaseBdev2", 00:10:17.090 "uuid": "cf14a714-e620-5a3f-85c3-1e350f5a1c18", 00:10:17.090 "is_configured": true, 00:10:17.090 "data_offset": 2048, 00:10:17.090 "data_size": 63488 
00:10:17.090 }, 00:10:17.090 { 00:10:17.090 "name": "BaseBdev3", 00:10:17.090 "uuid": "569d2107-ff02-5b52-b699-9381b3641983", 00:10:17.090 "is_configured": true, 00:10:17.090 "data_offset": 2048, 00:10:17.090 "data_size": 63488 00:10:17.090 } 00:10:17.090 ] 00:10:17.090 }' 00:10:17.090 10:38:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.090 10:38:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.655 10:38:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:17.655 10:38:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:17.655 [2024-11-15 10:38:38.668314] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:18.599 10:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:18.599 10:38:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.599 10:38:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.599 10:38:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.599 10:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:18.599 10:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:18.599 10:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:18.599 10:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:18.599 10:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:18.599 10:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:18.599 
10:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:18.599 10:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.599 10:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:18.599 10:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.599 10:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.599 10:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.600 10:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.600 10:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.600 10:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.600 10:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:18.600 10:38:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.600 10:38:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.600 10:38:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.600 10:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.600 "name": "raid_bdev1", 00:10:18.600 "uuid": "c104a11c-d4f8-47a9-a5c0-062794ee1c23", 00:10:18.600 "strip_size_kb": 0, 00:10:18.600 "state": "online", 00:10:18.600 "raid_level": "raid1", 00:10:18.600 "superblock": true, 00:10:18.600 "num_base_bdevs": 3, 00:10:18.600 "num_base_bdevs_discovered": 3, 00:10:18.600 "num_base_bdevs_operational": 3, 00:10:18.600 "base_bdevs_list": [ 00:10:18.600 { 00:10:18.600 "name": "BaseBdev1", 00:10:18.600 "uuid": "d3243a3c-86e3-56ba-9dac-67915b4ed41b", 
00:10:18.600 "is_configured": true, 00:10:18.600 "data_offset": 2048, 00:10:18.600 "data_size": 63488 00:10:18.600 }, 00:10:18.600 { 00:10:18.600 "name": "BaseBdev2", 00:10:18.600 "uuid": "cf14a714-e620-5a3f-85c3-1e350f5a1c18", 00:10:18.600 "is_configured": true, 00:10:18.600 "data_offset": 2048, 00:10:18.600 "data_size": 63488 00:10:18.600 }, 00:10:18.600 { 00:10:18.600 "name": "BaseBdev3", 00:10:18.600 "uuid": "569d2107-ff02-5b52-b699-9381b3641983", 00:10:18.600 "is_configured": true, 00:10:18.600 "data_offset": 2048, 00:10:18.600 "data_size": 63488 00:10:18.600 } 00:10:18.600 ] 00:10:18.600 }' 00:10:18.600 10:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.600 10:38:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.166 10:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:19.166 10:38:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.166 10:38:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.166 [2024-11-15 10:38:40.100246] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:19.166 [2024-11-15 10:38:40.100284] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:19.166 [2024-11-15 10:38:40.103796] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:19.166 [2024-11-15 10:38:40.103865] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:19.166 [2024-11-15 10:38:40.104010] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:19.166 [2024-11-15 10:38:40.104027] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:19.166 { 00:10:19.166 "results": [ 00:10:19.166 { 00:10:19.166 "job": "raid_bdev1", 
00:10:19.166 "core_mask": "0x1", 00:10:19.166 "workload": "randrw", 00:10:19.166 "percentage": 50, 00:10:19.166 "status": "finished", 00:10:19.166 "queue_depth": 1, 00:10:19.166 "io_size": 131072, 00:10:19.166 "runtime": 1.429614, 00:10:19.166 "iops": 9208.079943257411, 00:10:19.166 "mibps": 1151.0099929071764, 00:10:19.166 "io_failed": 0, 00:10:19.166 "io_timeout": 0, 00:10:19.166 "avg_latency_us": 104.40979199469628, 00:10:19.166 "min_latency_us": 42.35636363636364, 00:10:19.166 "max_latency_us": 2010.7636363636364 00:10:19.166 } 00:10:19.167 ], 00:10:19.167 "core_count": 1 00:10:19.167 } 00:10:19.167 10:38:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.167 10:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69147 00:10:19.167 10:38:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69147 ']' 00:10:19.167 10:38:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69147 00:10:19.167 10:38:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:19.167 10:38:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:19.167 10:38:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69147 00:10:19.167 10:38:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:19.167 10:38:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:19.167 killing process with pid 69147 00:10:19.167 10:38:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69147' 00:10:19.167 10:38:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69147 00:10:19.167 [2024-11-15 10:38:40.141323] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:19.167 10:38:40 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69147 00:10:19.424 [2024-11-15 10:38:40.357593] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:20.357 10:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.CjYEVtbWkY 00:10:20.357 10:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:20.357 10:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:20.357 10:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:20.357 10:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:20.357 10:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:20.357 10:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:20.357 10:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:20.357 00:10:20.357 real 0m4.890s 00:10:20.357 user 0m6.096s 00:10:20.357 sys 0m0.637s 00:10:20.357 10:38:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:20.357 10:38:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.357 ************************************ 00:10:20.357 END TEST raid_read_error_test 00:10:20.357 ************************************ 00:10:20.615 10:38:41 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:10:20.615 10:38:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:20.615 10:38:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:20.615 10:38:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:20.615 ************************************ 00:10:20.615 START TEST raid_write_error_test 00:10:20.615 ************************************ 00:10:20.615 10:38:41 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:10:20.615 10:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:20.615 10:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:20.615 10:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:20.615 10:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:20.615 10:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:20.615 10:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:20.615 10:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:20.615 10:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:20.615 10:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:20.615 10:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:20.615 10:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:20.615 10:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:20.615 10:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:20.615 10:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:20.615 10:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:20.615 10:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:20.615 10:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:20.615 10:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:10:20.615 10:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:20.615 10:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:20.615 10:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:20.615 10:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:20.615 10:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:20.615 10:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:20.616 10:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.pbSuDnvTRS 00:10:20.616 10:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69294 00:10:20.616 10:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69294 00:10:20.616 10:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:20.616 10:38:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69294 ']' 00:10:20.616 10:38:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.616 10:38:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:20.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.616 10:38:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:20.616 10:38:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:20.616 10:38:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.616 [2024-11-15 10:38:41.644291] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:10:20.616 [2024-11-15 10:38:41.645065] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69294 ] 00:10:20.874 [2024-11-15 10:38:41.823229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.874 [2024-11-15 10:38:41.955379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.132 [2024-11-15 10:38:42.162684] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:21.132 [2024-11-15 10:38:42.162733] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:21.698 10:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:21.698 10:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:21.698 10:38:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:21.698 10:38:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:21.698 10:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.698 10:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.698 BaseBdev1_malloc 00:10:21.698 10:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.698 10:38:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:21.698 10:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.698 10:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.698 true 00:10:21.698 10:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.698 10:38:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:21.698 10:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.698 10:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.698 [2024-11-15 10:38:42.783006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:21.698 [2024-11-15 10:38:42.783071] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.698 [2024-11-15 10:38:42.783101] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:21.698 [2024-11-15 10:38:42.783120] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.699 [2024-11-15 10:38:42.785974] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.699 [2024-11-15 10:38:42.786022] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:21.699 BaseBdev1 00:10:21.699 10:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.699 10:38:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:21.699 10:38:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:21.699 10:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.699 10:38:42 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:21.699 BaseBdev2_malloc 00:10:21.699 10:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.699 10:38:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:21.699 10:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.699 10:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.699 true 00:10:21.699 10:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.699 10:38:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:21.699 10:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.699 10:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.699 [2024-11-15 10:38:42.839964] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:21.699 [2024-11-15 10:38:42.840029] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.699 [2024-11-15 10:38:42.840054] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:21.699 [2024-11-15 10:38:42.840081] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.699 [2024-11-15 10:38:42.842974] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.699 [2024-11-15 10:38:42.843021] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:21.699 BaseBdev2 00:10:21.699 10:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.699 10:38:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:21.699 10:38:42 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:21.699 10:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.699 10:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.957 BaseBdev3_malloc 00:10:21.957 10:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.957 10:38:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:21.957 10:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.957 10:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.957 true 00:10:21.957 10:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.957 10:38:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:21.957 10:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.957 10:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.957 [2024-11-15 10:38:42.905283] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:21.957 [2024-11-15 10:38:42.905346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.957 [2024-11-15 10:38:42.905372] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:21.957 [2024-11-15 10:38:42.905390] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.957 [2024-11-15 10:38:42.908189] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.957 [2024-11-15 10:38:42.908237] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:21.957 BaseBdev3 00:10:21.957 10:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.957 10:38:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:21.957 10:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.957 10:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.957 [2024-11-15 10:38:42.913369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:21.957 [2024-11-15 10:38:42.915820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:21.957 [2024-11-15 10:38:42.915940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:21.957 [2024-11-15 10:38:42.916235] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:21.957 [2024-11-15 10:38:42.916264] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:21.957 [2024-11-15 10:38:42.916599] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:21.957 [2024-11-15 10:38:42.916854] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:21.957 [2024-11-15 10:38:42.916887] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:21.957 [2024-11-15 10:38:42.917077] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:21.957 10:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.958 10:38:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:21.958 10:38:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:10:21.958 10:38:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:21.958 10:38:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.958 10:38:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.958 10:38:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:21.958 10:38:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.958 10:38:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.958 10:38:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.958 10:38:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.958 10:38:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.958 10:38:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.958 10:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.958 10:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.958 10:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.958 10:38:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.958 "name": "raid_bdev1", 00:10:21.958 "uuid": "fd5ebab2-bde6-414d-a6f4-4c421cbeb68c", 00:10:21.958 "strip_size_kb": 0, 00:10:21.958 "state": "online", 00:10:21.958 "raid_level": "raid1", 00:10:21.958 "superblock": true, 00:10:21.958 "num_base_bdevs": 3, 00:10:21.958 "num_base_bdevs_discovered": 3, 00:10:21.958 "num_base_bdevs_operational": 3, 00:10:21.958 "base_bdevs_list": [ 00:10:21.958 { 00:10:21.958 "name": "BaseBdev1", 00:10:21.958 
"uuid": "4c9b72ba-120a-5b0a-9a34-720fa572482f", 00:10:21.958 "is_configured": true, 00:10:21.958 "data_offset": 2048, 00:10:21.958 "data_size": 63488 00:10:21.958 }, 00:10:21.958 { 00:10:21.958 "name": "BaseBdev2", 00:10:21.958 "uuid": "4a656e42-268a-52d3-9f53-3c9bd5dfc950", 00:10:21.958 "is_configured": true, 00:10:21.958 "data_offset": 2048, 00:10:21.958 "data_size": 63488 00:10:21.958 }, 00:10:21.958 { 00:10:21.958 "name": "BaseBdev3", 00:10:21.958 "uuid": "aab5d389-507e-56bc-b3a7-295aa207306f", 00:10:21.958 "is_configured": true, 00:10:21.958 "data_offset": 2048, 00:10:21.958 "data_size": 63488 00:10:21.958 } 00:10:21.958 ] 00:10:21.958 }' 00:10:21.958 10:38:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.958 10:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.524 10:38:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:22.524 10:38:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:22.524 [2024-11-15 10:38:43.539603] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:23.459 10:38:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:23.459 10:38:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.459 10:38:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.459 [2024-11-15 10:38:44.419983] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:23.459 [2024-11-15 10:38:44.420058] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:23.459 [2024-11-15 10:38:44.420321] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:10:23.460 10:38:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.460 10:38:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:23.460 10:38:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:23.460 10:38:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:23.460 10:38:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:23.460 10:38:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:23.460 10:38:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:23.460 10:38:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:23.460 10:38:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:23.460 10:38:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:23.460 10:38:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:23.460 10:38:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.460 10:38:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.460 10:38:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.460 10:38:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.460 10:38:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.460 10:38:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:23.460 10:38:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:23.460 10:38:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.460 10:38:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.460 10:38:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.460 "name": "raid_bdev1", 00:10:23.460 "uuid": "fd5ebab2-bde6-414d-a6f4-4c421cbeb68c", 00:10:23.460 "strip_size_kb": 0, 00:10:23.460 "state": "online", 00:10:23.460 "raid_level": "raid1", 00:10:23.460 "superblock": true, 00:10:23.460 "num_base_bdevs": 3, 00:10:23.460 "num_base_bdevs_discovered": 2, 00:10:23.460 "num_base_bdevs_operational": 2, 00:10:23.460 "base_bdevs_list": [ 00:10:23.460 { 00:10:23.460 "name": null, 00:10:23.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.460 "is_configured": false, 00:10:23.460 "data_offset": 0, 00:10:23.460 "data_size": 63488 00:10:23.460 }, 00:10:23.460 { 00:10:23.460 "name": "BaseBdev2", 00:10:23.460 "uuid": "4a656e42-268a-52d3-9f53-3c9bd5dfc950", 00:10:23.460 "is_configured": true, 00:10:23.460 "data_offset": 2048, 00:10:23.460 "data_size": 63488 00:10:23.460 }, 00:10:23.460 { 00:10:23.460 "name": "BaseBdev3", 00:10:23.460 "uuid": "aab5d389-507e-56bc-b3a7-295aa207306f", 00:10:23.460 "is_configured": true, 00:10:23.460 "data_offset": 2048, 00:10:23.460 "data_size": 63488 00:10:23.460 } 00:10:23.460 ] 00:10:23.460 }' 00:10:23.460 10:38:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.460 10:38:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.023 10:38:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:24.023 10:38:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.023 10:38:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.023 [2024-11-15 10:38:44.945970] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:24.023 [2024-11-15 10:38:44.946018] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:24.023 [2024-11-15 10:38:44.949472] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:24.023 [2024-11-15 10:38:44.949570] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.023 [2024-11-15 10:38:44.949686] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:24.023 [2024-11-15 10:38:44.949708] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:24.023 10:38:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.023 10:38:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69294 00:10:24.023 10:38:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69294 ']' 00:10:24.023 { 00:10:24.023 "results": [ 00:10:24.023 { 00:10:24.023 "job": "raid_bdev1", 00:10:24.023 "core_mask": "0x1", 00:10:24.023 "workload": "randrw", 00:10:24.023 "percentage": 50, 00:10:24.023 "status": "finished", 00:10:24.023 "queue_depth": 1, 00:10:24.023 "io_size": 131072, 00:10:24.023 "runtime": 1.403786, 00:10:24.023 "iops": 10433.21417936922, 00:10:24.023 "mibps": 1304.1517724211526, 00:10:24.023 "io_failed": 0, 00:10:24.023 "io_timeout": 0, 00:10:24.023 "avg_latency_us": 91.61003364244658, 00:10:24.023 "min_latency_us": 42.123636363636365, 00:10:24.023 "max_latency_us": 1839.4763636363637 00:10:24.023 } 00:10:24.023 ], 00:10:24.023 "core_count": 1 00:10:24.023 } 00:10:24.023 10:38:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69294 00:10:24.023 10:38:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:24.023 10:38:44 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:24.023 10:38:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69294 00:10:24.023 10:38:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:24.023 10:38:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:24.023 killing process with pid 69294 00:10:24.023 10:38:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69294' 00:10:24.023 10:38:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69294 00:10:24.023 [2024-11-15 10:38:44.984088] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:24.023 10:38:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69294 00:10:24.281 [2024-11-15 10:38:45.191475] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:25.215 10:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.pbSuDnvTRS 00:10:25.215 10:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:25.215 10:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:25.215 10:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:25.215 10:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:25.215 10:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:25.215 10:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:25.215 10:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:25.215 00:10:25.215 real 0m4.757s 00:10:25.215 user 0m5.965s 00:10:25.215 sys 0m0.568s 00:10:25.215 10:38:46 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.215 10:38:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.215 ************************************ 00:10:25.215 END TEST raid_write_error_test 00:10:25.215 ************************************ 00:10:25.215 10:38:46 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:25.215 10:38:46 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:25.215 10:38:46 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:10:25.215 10:38:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:25.215 10:38:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.215 10:38:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:25.215 ************************************ 00:10:25.215 START TEST raid_state_function_test 00:10:25.215 ************************************ 00:10:25.215 10:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:10:25.215 10:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:25.215 10:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:25.215 10:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:25.215 10:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:25.215 10:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:25.215 10:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.215 10:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:25.215 10:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:10:25.215 10:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.215 10:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:25.215 10:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:25.215 10:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.215 10:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:25.215 10:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:25.215 10:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.215 10:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:25.215 10:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:25.215 10:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.215 10:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:25.215 10:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:25.215 10:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:25.215 10:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:25.215 10:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:25.215 10:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:25.215 10:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:25.215 10:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:25.215 
10:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:25.215 10:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:25.215 10:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:25.215 10:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69432 00:10:25.215 Process raid pid: 69432 00:10:25.215 10:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69432' 00:10:25.215 10:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69432 00:10:25.215 10:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:25.215 10:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69432 ']' 00:10:25.215 10:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.215 10:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:25.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.215 10:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.215 10:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:25.215 10:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.473 [2024-11-15 10:38:46.466224] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:10:25.473 [2024-11-15 10:38:46.466403] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:25.730 [2024-11-15 10:38:46.652070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.730 [2024-11-15 10:38:46.785931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.987 [2024-11-15 10:38:46.990206] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.987 [2024-11-15 10:38:46.990257] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:26.552 10:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:26.552 10:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:26.552 10:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:26.552 10:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.552 10:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.552 [2024-11-15 10:38:47.464448] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:26.552 [2024-11-15 10:38:47.464520] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:26.552 [2024-11-15 10:38:47.464538] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:26.552 [2024-11-15 10:38:47.464554] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:26.552 [2024-11-15 10:38:47.464564] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:26.552 [2024-11-15 10:38:47.464580] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:26.552 [2024-11-15 10:38:47.464590] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:26.552 [2024-11-15 10:38:47.464604] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:26.552 10:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.552 10:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:26.552 10:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.552 10:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.552 10:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:26.552 10:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.552 10:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.552 10:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.552 10:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.552 10:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.552 10:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.552 10:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.552 10:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.552 10:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:26.552 10:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.552 10:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.552 10:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.552 "name": "Existed_Raid", 00:10:26.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.552 "strip_size_kb": 64, 00:10:26.552 "state": "configuring", 00:10:26.552 "raid_level": "raid0", 00:10:26.552 "superblock": false, 00:10:26.552 "num_base_bdevs": 4, 00:10:26.552 "num_base_bdevs_discovered": 0, 00:10:26.552 "num_base_bdevs_operational": 4, 00:10:26.552 "base_bdevs_list": [ 00:10:26.552 { 00:10:26.552 "name": "BaseBdev1", 00:10:26.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.552 "is_configured": false, 00:10:26.552 "data_offset": 0, 00:10:26.552 "data_size": 0 00:10:26.552 }, 00:10:26.552 { 00:10:26.552 "name": "BaseBdev2", 00:10:26.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.552 "is_configured": false, 00:10:26.552 "data_offset": 0, 00:10:26.552 "data_size": 0 00:10:26.552 }, 00:10:26.552 { 00:10:26.552 "name": "BaseBdev3", 00:10:26.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.552 "is_configured": false, 00:10:26.552 "data_offset": 0, 00:10:26.552 "data_size": 0 00:10:26.552 }, 00:10:26.552 { 00:10:26.552 "name": "BaseBdev4", 00:10:26.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.552 "is_configured": false, 00:10:26.552 "data_offset": 0, 00:10:26.552 "data_size": 0 00:10:26.552 } 00:10:26.552 ] 00:10:26.552 }' 00:10:26.552 10:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.552 10:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.121 10:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:10:27.121 10:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.121 10:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.121 [2024-11-15 10:38:48.008572] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:27.121 [2024-11-15 10:38:48.008624] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:27.121 10:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.121 10:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:27.121 10:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.121 10:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.121 [2024-11-15 10:38:48.016522] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:27.121 [2024-11-15 10:38:48.016578] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:27.121 [2024-11-15 10:38:48.016592] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:27.121 [2024-11-15 10:38:48.016618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:27.121 [2024-11-15 10:38:48.016628] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:27.121 [2024-11-15 10:38:48.016643] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:27.121 [2024-11-15 10:38:48.016653] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:27.121 [2024-11-15 10:38:48.016668] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:27.121 10:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.121 10:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:27.121 10:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.121 10:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.121 [2024-11-15 10:38:48.061704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:27.121 BaseBdev1 00:10:27.121 10:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.121 10:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:27.121 10:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:27.121 10:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:27.121 10:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:27.121 10:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:27.121 10:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:27.121 10:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:27.121 10:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.121 10:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.121 10:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.121 10:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:27.121 10:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.121 10:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.121 [ 00:10:27.121 { 00:10:27.121 "name": "BaseBdev1", 00:10:27.121 "aliases": [ 00:10:27.121 "3875f897-7540-419e-8ba3-2ef81f20da4f" 00:10:27.121 ], 00:10:27.121 "product_name": "Malloc disk", 00:10:27.121 "block_size": 512, 00:10:27.121 "num_blocks": 65536, 00:10:27.121 "uuid": "3875f897-7540-419e-8ba3-2ef81f20da4f", 00:10:27.121 "assigned_rate_limits": { 00:10:27.121 "rw_ios_per_sec": 0, 00:10:27.121 "rw_mbytes_per_sec": 0, 00:10:27.121 "r_mbytes_per_sec": 0, 00:10:27.121 "w_mbytes_per_sec": 0 00:10:27.121 }, 00:10:27.121 "claimed": true, 00:10:27.121 "claim_type": "exclusive_write", 00:10:27.121 "zoned": false, 00:10:27.121 "supported_io_types": { 00:10:27.121 "read": true, 00:10:27.121 "write": true, 00:10:27.121 "unmap": true, 00:10:27.121 "flush": true, 00:10:27.121 "reset": true, 00:10:27.121 "nvme_admin": false, 00:10:27.121 "nvme_io": false, 00:10:27.121 "nvme_io_md": false, 00:10:27.121 "write_zeroes": true, 00:10:27.121 "zcopy": true, 00:10:27.121 "get_zone_info": false, 00:10:27.121 "zone_management": false, 00:10:27.121 "zone_append": false, 00:10:27.121 "compare": false, 00:10:27.121 "compare_and_write": false, 00:10:27.121 "abort": true, 00:10:27.121 "seek_hole": false, 00:10:27.121 "seek_data": false, 00:10:27.121 "copy": true, 00:10:27.121 "nvme_iov_md": false 00:10:27.121 }, 00:10:27.121 "memory_domains": [ 00:10:27.121 { 00:10:27.121 "dma_device_id": "system", 00:10:27.121 "dma_device_type": 1 00:10:27.121 }, 00:10:27.121 { 00:10:27.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.121 "dma_device_type": 2 00:10:27.121 } 00:10:27.121 ], 00:10:27.121 "driver_specific": {} 00:10:27.121 } 00:10:27.121 ] 00:10:27.121 10:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:27.121 10:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:27.121 10:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:27.121 10:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.121 10:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.121 10:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:27.121 10:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.121 10:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:27.121 10:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.121 10:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.121 10:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.121 10:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.121 10:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.121 10:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.121 10:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.121 10:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.121 10:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.121 10:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.121 "name": "Existed_Raid", 
00:10:27.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.121 "strip_size_kb": 64, 00:10:27.121 "state": "configuring", 00:10:27.121 "raid_level": "raid0", 00:10:27.121 "superblock": false, 00:10:27.121 "num_base_bdevs": 4, 00:10:27.121 "num_base_bdevs_discovered": 1, 00:10:27.121 "num_base_bdevs_operational": 4, 00:10:27.121 "base_bdevs_list": [ 00:10:27.121 { 00:10:27.121 "name": "BaseBdev1", 00:10:27.121 "uuid": "3875f897-7540-419e-8ba3-2ef81f20da4f", 00:10:27.121 "is_configured": true, 00:10:27.121 "data_offset": 0, 00:10:27.121 "data_size": 65536 00:10:27.121 }, 00:10:27.121 { 00:10:27.121 "name": "BaseBdev2", 00:10:27.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.122 "is_configured": false, 00:10:27.122 "data_offset": 0, 00:10:27.122 "data_size": 0 00:10:27.122 }, 00:10:27.122 { 00:10:27.122 "name": "BaseBdev3", 00:10:27.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.122 "is_configured": false, 00:10:27.122 "data_offset": 0, 00:10:27.122 "data_size": 0 00:10:27.122 }, 00:10:27.122 { 00:10:27.122 "name": "BaseBdev4", 00:10:27.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.122 "is_configured": false, 00:10:27.122 "data_offset": 0, 00:10:27.122 "data_size": 0 00:10:27.122 } 00:10:27.122 ] 00:10:27.122 }' 00:10:27.122 10:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.122 10:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.688 10:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:27.688 10:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.688 10:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.688 [2024-11-15 10:38:48.581944] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:27.688 [2024-11-15 10:38:48.582032] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:27.688 10:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.688 10:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:27.688 10:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.688 10:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.688 [2024-11-15 10:38:48.589958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:27.688 [2024-11-15 10:38:48.592452] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:27.688 [2024-11-15 10:38:48.592532] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:27.688 [2024-11-15 10:38:48.592550] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:27.688 [2024-11-15 10:38:48.592571] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:27.688 [2024-11-15 10:38:48.592581] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:27.688 [2024-11-15 10:38:48.592595] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:27.688 10:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.688 10:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:27.688 10:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:27.688 10:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:10:27.688 10:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.688 10:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.688 10:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:27.688 10:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.688 10:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:27.688 10:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.688 10:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.688 10:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.688 10:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.688 10:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.688 10:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.688 10:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.688 10:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.688 10:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.688 10:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.688 "name": "Existed_Raid", 00:10:27.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.688 "strip_size_kb": 64, 00:10:27.688 "state": "configuring", 00:10:27.688 "raid_level": "raid0", 00:10:27.688 "superblock": false, 00:10:27.688 "num_base_bdevs": 4, 00:10:27.688 
"num_base_bdevs_discovered": 1, 00:10:27.688 "num_base_bdevs_operational": 4, 00:10:27.688 "base_bdevs_list": [ 00:10:27.688 { 00:10:27.688 "name": "BaseBdev1", 00:10:27.688 "uuid": "3875f897-7540-419e-8ba3-2ef81f20da4f", 00:10:27.688 "is_configured": true, 00:10:27.688 "data_offset": 0, 00:10:27.688 "data_size": 65536 00:10:27.688 }, 00:10:27.688 { 00:10:27.688 "name": "BaseBdev2", 00:10:27.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.688 "is_configured": false, 00:10:27.688 "data_offset": 0, 00:10:27.688 "data_size": 0 00:10:27.688 }, 00:10:27.688 { 00:10:27.688 "name": "BaseBdev3", 00:10:27.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.688 "is_configured": false, 00:10:27.688 "data_offset": 0, 00:10:27.688 "data_size": 0 00:10:27.688 }, 00:10:27.688 { 00:10:27.688 "name": "BaseBdev4", 00:10:27.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.688 "is_configured": false, 00:10:27.688 "data_offset": 0, 00:10:27.688 "data_size": 0 00:10:27.688 } 00:10:27.688 ] 00:10:27.688 }' 00:10:27.688 10:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.688 10:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.256 10:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:28.256 10:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.256 10:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.256 [2024-11-15 10:38:49.164667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:28.256 BaseBdev2 00:10:28.256 10:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.256 10:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:28.256 10:38:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:28.256 10:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:28.256 10:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:28.256 10:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:28.256 10:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:28.256 10:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:28.256 10:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.256 10:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.256 10:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.256 10:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:28.256 10:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.256 10:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.256 [ 00:10:28.256 { 00:10:28.256 "name": "BaseBdev2", 00:10:28.256 "aliases": [ 00:10:28.256 "8ea3932f-19c5-4e26-9096-44297c40bfbb" 00:10:28.256 ], 00:10:28.256 "product_name": "Malloc disk", 00:10:28.256 "block_size": 512, 00:10:28.256 "num_blocks": 65536, 00:10:28.256 "uuid": "8ea3932f-19c5-4e26-9096-44297c40bfbb", 00:10:28.256 "assigned_rate_limits": { 00:10:28.256 "rw_ios_per_sec": 0, 00:10:28.256 "rw_mbytes_per_sec": 0, 00:10:28.256 "r_mbytes_per_sec": 0, 00:10:28.256 "w_mbytes_per_sec": 0 00:10:28.256 }, 00:10:28.256 "claimed": true, 00:10:28.256 "claim_type": "exclusive_write", 00:10:28.256 "zoned": false, 00:10:28.256 "supported_io_types": { 
00:10:28.256 "read": true, 00:10:28.256 "write": true, 00:10:28.256 "unmap": true, 00:10:28.256 "flush": true, 00:10:28.256 "reset": true, 00:10:28.256 "nvme_admin": false, 00:10:28.256 "nvme_io": false, 00:10:28.256 "nvme_io_md": false, 00:10:28.256 "write_zeroes": true, 00:10:28.256 "zcopy": true, 00:10:28.256 "get_zone_info": false, 00:10:28.256 "zone_management": false, 00:10:28.256 "zone_append": false, 00:10:28.256 "compare": false, 00:10:28.256 "compare_and_write": false, 00:10:28.256 "abort": true, 00:10:28.256 "seek_hole": false, 00:10:28.256 "seek_data": false, 00:10:28.256 "copy": true, 00:10:28.256 "nvme_iov_md": false 00:10:28.256 }, 00:10:28.256 "memory_domains": [ 00:10:28.256 { 00:10:28.256 "dma_device_id": "system", 00:10:28.256 "dma_device_type": 1 00:10:28.256 }, 00:10:28.256 { 00:10:28.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.256 "dma_device_type": 2 00:10:28.256 } 00:10:28.256 ], 00:10:28.256 "driver_specific": {} 00:10:28.256 } 00:10:28.256 ] 00:10:28.256 10:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.256 10:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:28.256 10:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:28.256 10:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:28.256 10:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:28.256 10:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.256 10:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.256 10:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:28.256 10:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:28.256 10:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:28.256 10:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.256 10:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.256 10:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.256 10:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.256 10:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.256 10:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.256 10:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.256 10:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.256 10:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.256 10:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.256 "name": "Existed_Raid", 00:10:28.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.256 "strip_size_kb": 64, 00:10:28.256 "state": "configuring", 00:10:28.256 "raid_level": "raid0", 00:10:28.256 "superblock": false, 00:10:28.256 "num_base_bdevs": 4, 00:10:28.256 "num_base_bdevs_discovered": 2, 00:10:28.256 "num_base_bdevs_operational": 4, 00:10:28.256 "base_bdevs_list": [ 00:10:28.256 { 00:10:28.256 "name": "BaseBdev1", 00:10:28.256 "uuid": "3875f897-7540-419e-8ba3-2ef81f20da4f", 00:10:28.256 "is_configured": true, 00:10:28.256 "data_offset": 0, 00:10:28.256 "data_size": 65536 00:10:28.256 }, 00:10:28.256 { 00:10:28.256 "name": "BaseBdev2", 00:10:28.256 "uuid": "8ea3932f-19c5-4e26-9096-44297c40bfbb", 00:10:28.257 
"is_configured": true, 00:10:28.257 "data_offset": 0, 00:10:28.257 "data_size": 65536 00:10:28.257 }, 00:10:28.257 { 00:10:28.257 "name": "BaseBdev3", 00:10:28.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.257 "is_configured": false, 00:10:28.257 "data_offset": 0, 00:10:28.257 "data_size": 0 00:10:28.257 }, 00:10:28.257 { 00:10:28.257 "name": "BaseBdev4", 00:10:28.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.257 "is_configured": false, 00:10:28.257 "data_offset": 0, 00:10:28.257 "data_size": 0 00:10:28.257 } 00:10:28.257 ] 00:10:28.257 }' 00:10:28.257 10:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.257 10:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.823 10:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:28.823 10:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.823 10:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.823 [2024-11-15 10:38:49.735704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:28.823 BaseBdev3 00:10:28.823 10:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.823 10:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:28.823 10:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:28.823 10:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:28.823 10:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:28.823 10:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:28.823 10:38:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:28.823 10:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:28.823 10:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.823 10:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.823 10:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.823 10:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:28.823 10:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.823 10:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.823 [ 00:10:28.823 { 00:10:28.823 "name": "BaseBdev3", 00:10:28.823 "aliases": [ 00:10:28.823 "fb5aeba3-256d-40df-9681-17e9a4be3a97" 00:10:28.823 ], 00:10:28.823 "product_name": "Malloc disk", 00:10:28.823 "block_size": 512, 00:10:28.823 "num_blocks": 65536, 00:10:28.823 "uuid": "fb5aeba3-256d-40df-9681-17e9a4be3a97", 00:10:28.823 "assigned_rate_limits": { 00:10:28.823 "rw_ios_per_sec": 0, 00:10:28.823 "rw_mbytes_per_sec": 0, 00:10:28.823 "r_mbytes_per_sec": 0, 00:10:28.823 "w_mbytes_per_sec": 0 00:10:28.823 }, 00:10:28.823 "claimed": true, 00:10:28.823 "claim_type": "exclusive_write", 00:10:28.823 "zoned": false, 00:10:28.823 "supported_io_types": { 00:10:28.823 "read": true, 00:10:28.823 "write": true, 00:10:28.823 "unmap": true, 00:10:28.823 "flush": true, 00:10:28.823 "reset": true, 00:10:28.823 "nvme_admin": false, 00:10:28.823 "nvme_io": false, 00:10:28.823 "nvme_io_md": false, 00:10:28.823 "write_zeroes": true, 00:10:28.823 "zcopy": true, 00:10:28.823 "get_zone_info": false, 00:10:28.823 "zone_management": false, 00:10:28.823 "zone_append": false, 00:10:28.823 "compare": false, 00:10:28.823 "compare_and_write": false, 
00:10:28.823 "abort": true, 00:10:28.823 "seek_hole": false, 00:10:28.823 "seek_data": false, 00:10:28.823 "copy": true, 00:10:28.823 "nvme_iov_md": false 00:10:28.823 }, 00:10:28.823 "memory_domains": [ 00:10:28.823 { 00:10:28.823 "dma_device_id": "system", 00:10:28.823 "dma_device_type": 1 00:10:28.823 }, 00:10:28.823 { 00:10:28.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.823 "dma_device_type": 2 00:10:28.823 } 00:10:28.823 ], 00:10:28.823 "driver_specific": {} 00:10:28.823 } 00:10:28.823 ] 00:10:28.823 10:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.823 10:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:28.823 10:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:28.823 10:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:28.823 10:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:28.823 10:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.823 10:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.823 10:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:28.823 10:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.823 10:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:28.823 10:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.823 10:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.824 10:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:28.824 10:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.824 10:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.824 10:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.824 10:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.824 10:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.824 10:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.824 10:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.824 "name": "Existed_Raid", 00:10:28.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.824 "strip_size_kb": 64, 00:10:28.824 "state": "configuring", 00:10:28.824 "raid_level": "raid0", 00:10:28.824 "superblock": false, 00:10:28.824 "num_base_bdevs": 4, 00:10:28.824 "num_base_bdevs_discovered": 3, 00:10:28.824 "num_base_bdevs_operational": 4, 00:10:28.824 "base_bdevs_list": [ 00:10:28.824 { 00:10:28.824 "name": "BaseBdev1", 00:10:28.824 "uuid": "3875f897-7540-419e-8ba3-2ef81f20da4f", 00:10:28.824 "is_configured": true, 00:10:28.824 "data_offset": 0, 00:10:28.824 "data_size": 65536 00:10:28.824 }, 00:10:28.824 { 00:10:28.824 "name": "BaseBdev2", 00:10:28.824 "uuid": "8ea3932f-19c5-4e26-9096-44297c40bfbb", 00:10:28.824 "is_configured": true, 00:10:28.824 "data_offset": 0, 00:10:28.824 "data_size": 65536 00:10:28.824 }, 00:10:28.824 { 00:10:28.824 "name": "BaseBdev3", 00:10:28.824 "uuid": "fb5aeba3-256d-40df-9681-17e9a4be3a97", 00:10:28.824 "is_configured": true, 00:10:28.824 "data_offset": 0, 00:10:28.824 "data_size": 65536 00:10:28.824 }, 00:10:28.824 { 00:10:28.824 "name": "BaseBdev4", 00:10:28.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.824 "is_configured": false, 
00:10:28.824 "data_offset": 0, 00:10:28.824 "data_size": 0 00:10:28.824 } 00:10:28.824 ] 00:10:28.824 }' 00:10:28.824 10:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.824 10:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.389 10:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:29.389 10:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.389 10:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.389 [2024-11-15 10:38:50.298155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:29.389 [2024-11-15 10:38:50.298218] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:29.389 [2024-11-15 10:38:50.298233] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:29.390 [2024-11-15 10:38:50.298609] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:29.390 [2024-11-15 10:38:50.298842] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:29.390 [2024-11-15 10:38:50.298875] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:29.390 [2024-11-15 10:38:50.299187] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:29.390 BaseBdev4 00:10:29.390 10:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.390 10:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:29.390 10:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:29.390 10:38:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:29.390 10:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:29.390 10:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:29.390 10:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:29.390 10:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:29.390 10:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.390 10:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.390 10:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.390 10:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:29.390 10:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.390 10:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.390 [ 00:10:29.390 { 00:10:29.390 "name": "BaseBdev4", 00:10:29.390 "aliases": [ 00:10:29.390 "c6ae6b41-9c0d-475e-94f3-2b9ac3fd9489" 00:10:29.390 ], 00:10:29.390 "product_name": "Malloc disk", 00:10:29.390 "block_size": 512, 00:10:29.390 "num_blocks": 65536, 00:10:29.390 "uuid": "c6ae6b41-9c0d-475e-94f3-2b9ac3fd9489", 00:10:29.390 "assigned_rate_limits": { 00:10:29.390 "rw_ios_per_sec": 0, 00:10:29.390 "rw_mbytes_per_sec": 0, 00:10:29.390 "r_mbytes_per_sec": 0, 00:10:29.390 "w_mbytes_per_sec": 0 00:10:29.390 }, 00:10:29.390 "claimed": true, 00:10:29.390 "claim_type": "exclusive_write", 00:10:29.390 "zoned": false, 00:10:29.390 "supported_io_types": { 00:10:29.390 "read": true, 00:10:29.390 "write": true, 00:10:29.390 "unmap": true, 00:10:29.390 "flush": true, 00:10:29.390 "reset": true, 00:10:29.390 
"nvme_admin": false, 00:10:29.390 "nvme_io": false, 00:10:29.390 "nvme_io_md": false, 00:10:29.390 "write_zeroes": true, 00:10:29.390 "zcopy": true, 00:10:29.390 "get_zone_info": false, 00:10:29.390 "zone_management": false, 00:10:29.390 "zone_append": false, 00:10:29.390 "compare": false, 00:10:29.390 "compare_and_write": false, 00:10:29.390 "abort": true, 00:10:29.390 "seek_hole": false, 00:10:29.390 "seek_data": false, 00:10:29.390 "copy": true, 00:10:29.390 "nvme_iov_md": false 00:10:29.390 }, 00:10:29.390 "memory_domains": [ 00:10:29.390 { 00:10:29.390 "dma_device_id": "system", 00:10:29.390 "dma_device_type": 1 00:10:29.390 }, 00:10:29.390 { 00:10:29.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.390 "dma_device_type": 2 00:10:29.390 } 00:10:29.390 ], 00:10:29.390 "driver_specific": {} 00:10:29.390 } 00:10:29.390 ] 00:10:29.390 10:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.390 10:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:29.390 10:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:29.390 10:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:29.390 10:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:29.390 10:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.390 10:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:29.390 10:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:29.390 10:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.390 10:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:29.390 10:38:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.390 10:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.390 10:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.390 10:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.390 10:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.390 10:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.390 10:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.390 10:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.390 10:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.390 10:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.390 "name": "Existed_Raid", 00:10:29.390 "uuid": "11ff42d7-b1de-4816-8adf-70ed0144852f", 00:10:29.390 "strip_size_kb": 64, 00:10:29.390 "state": "online", 00:10:29.390 "raid_level": "raid0", 00:10:29.390 "superblock": false, 00:10:29.390 "num_base_bdevs": 4, 00:10:29.390 "num_base_bdevs_discovered": 4, 00:10:29.390 "num_base_bdevs_operational": 4, 00:10:29.390 "base_bdevs_list": [ 00:10:29.390 { 00:10:29.390 "name": "BaseBdev1", 00:10:29.390 "uuid": "3875f897-7540-419e-8ba3-2ef81f20da4f", 00:10:29.390 "is_configured": true, 00:10:29.390 "data_offset": 0, 00:10:29.390 "data_size": 65536 00:10:29.390 }, 00:10:29.390 { 00:10:29.390 "name": "BaseBdev2", 00:10:29.390 "uuid": "8ea3932f-19c5-4e26-9096-44297c40bfbb", 00:10:29.390 "is_configured": true, 00:10:29.390 "data_offset": 0, 00:10:29.390 "data_size": 65536 00:10:29.390 }, 00:10:29.390 { 00:10:29.390 "name": "BaseBdev3", 00:10:29.390 "uuid": 
"fb5aeba3-256d-40df-9681-17e9a4be3a97", 00:10:29.390 "is_configured": true, 00:10:29.390 "data_offset": 0, 00:10:29.390 "data_size": 65536 00:10:29.390 }, 00:10:29.390 { 00:10:29.390 "name": "BaseBdev4", 00:10:29.390 "uuid": "c6ae6b41-9c0d-475e-94f3-2b9ac3fd9489", 00:10:29.390 "is_configured": true, 00:10:29.390 "data_offset": 0, 00:10:29.390 "data_size": 65536 00:10:29.390 } 00:10:29.390 ] 00:10:29.390 }' 00:10:29.390 10:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.390 10:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.957 10:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:29.957 10:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:29.957 10:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:29.957 10:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:29.957 10:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:29.957 10:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:29.957 10:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:29.957 10:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:29.957 10:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.957 10:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.957 [2024-11-15 10:38:50.818805] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:29.957 10:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.957 10:38:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:29.957 "name": "Existed_Raid", 00:10:29.957 "aliases": [ 00:10:29.957 "11ff42d7-b1de-4816-8adf-70ed0144852f" 00:10:29.957 ], 00:10:29.957 "product_name": "Raid Volume", 00:10:29.957 "block_size": 512, 00:10:29.957 "num_blocks": 262144, 00:10:29.957 "uuid": "11ff42d7-b1de-4816-8adf-70ed0144852f", 00:10:29.957 "assigned_rate_limits": { 00:10:29.957 "rw_ios_per_sec": 0, 00:10:29.957 "rw_mbytes_per_sec": 0, 00:10:29.957 "r_mbytes_per_sec": 0, 00:10:29.957 "w_mbytes_per_sec": 0 00:10:29.957 }, 00:10:29.957 "claimed": false, 00:10:29.957 "zoned": false, 00:10:29.957 "supported_io_types": { 00:10:29.957 "read": true, 00:10:29.957 "write": true, 00:10:29.957 "unmap": true, 00:10:29.957 "flush": true, 00:10:29.957 "reset": true, 00:10:29.957 "nvme_admin": false, 00:10:29.957 "nvme_io": false, 00:10:29.957 "nvme_io_md": false, 00:10:29.957 "write_zeroes": true, 00:10:29.957 "zcopy": false, 00:10:29.957 "get_zone_info": false, 00:10:29.957 "zone_management": false, 00:10:29.957 "zone_append": false, 00:10:29.957 "compare": false, 00:10:29.957 "compare_and_write": false, 00:10:29.957 "abort": false, 00:10:29.957 "seek_hole": false, 00:10:29.957 "seek_data": false, 00:10:29.957 "copy": false, 00:10:29.957 "nvme_iov_md": false 00:10:29.957 }, 00:10:29.957 "memory_domains": [ 00:10:29.957 { 00:10:29.957 "dma_device_id": "system", 00:10:29.957 "dma_device_type": 1 00:10:29.957 }, 00:10:29.957 { 00:10:29.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.957 "dma_device_type": 2 00:10:29.957 }, 00:10:29.957 { 00:10:29.957 "dma_device_id": "system", 00:10:29.957 "dma_device_type": 1 00:10:29.957 }, 00:10:29.957 { 00:10:29.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.957 "dma_device_type": 2 00:10:29.957 }, 00:10:29.957 { 00:10:29.957 "dma_device_id": "system", 00:10:29.957 "dma_device_type": 1 00:10:29.957 }, 00:10:29.957 { 00:10:29.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:29.957 "dma_device_type": 2 00:10:29.957 }, 00:10:29.957 { 00:10:29.957 "dma_device_id": "system", 00:10:29.957 "dma_device_type": 1 00:10:29.957 }, 00:10:29.957 { 00:10:29.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.957 "dma_device_type": 2 00:10:29.957 } 00:10:29.957 ], 00:10:29.957 "driver_specific": { 00:10:29.957 "raid": { 00:10:29.957 "uuid": "11ff42d7-b1de-4816-8adf-70ed0144852f", 00:10:29.957 "strip_size_kb": 64, 00:10:29.957 "state": "online", 00:10:29.957 "raid_level": "raid0", 00:10:29.957 "superblock": false, 00:10:29.957 "num_base_bdevs": 4, 00:10:29.957 "num_base_bdevs_discovered": 4, 00:10:29.957 "num_base_bdevs_operational": 4, 00:10:29.957 "base_bdevs_list": [ 00:10:29.957 { 00:10:29.957 "name": "BaseBdev1", 00:10:29.957 "uuid": "3875f897-7540-419e-8ba3-2ef81f20da4f", 00:10:29.957 "is_configured": true, 00:10:29.957 "data_offset": 0, 00:10:29.957 "data_size": 65536 00:10:29.957 }, 00:10:29.957 { 00:10:29.957 "name": "BaseBdev2", 00:10:29.957 "uuid": "8ea3932f-19c5-4e26-9096-44297c40bfbb", 00:10:29.957 "is_configured": true, 00:10:29.957 "data_offset": 0, 00:10:29.957 "data_size": 65536 00:10:29.957 }, 00:10:29.957 { 00:10:29.957 "name": "BaseBdev3", 00:10:29.957 "uuid": "fb5aeba3-256d-40df-9681-17e9a4be3a97", 00:10:29.957 "is_configured": true, 00:10:29.957 "data_offset": 0, 00:10:29.957 "data_size": 65536 00:10:29.957 }, 00:10:29.957 { 00:10:29.957 "name": "BaseBdev4", 00:10:29.957 "uuid": "c6ae6b41-9c0d-475e-94f3-2b9ac3fd9489", 00:10:29.957 "is_configured": true, 00:10:29.957 "data_offset": 0, 00:10:29.957 "data_size": 65536 00:10:29.957 } 00:10:29.957 ] 00:10:29.957 } 00:10:29.957 } 00:10:29.957 }' 00:10:29.957 10:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:29.957 10:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:29.957 BaseBdev2 00:10:29.957 BaseBdev3 
00:10:29.957 BaseBdev4' 00:10:29.957 10:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.957 10:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:29.957 10:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.958 10:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:29.958 10:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.958 10:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.958 10:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.958 10:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.958 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.958 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.958 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.958 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:29.958 10:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.958 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.958 10:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.958 10:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.958 10:38:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.958 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.958 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.958 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:29.958 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.958 10:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.958 10:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.958 10:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.958 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.958 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.958 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.958 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:29.958 10:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.958 10:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.958 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.217 10:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.217 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.217 10:38:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.217 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:30.217 10:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.217 10:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.217 [2024-11-15 10:38:51.166504] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:30.217 [2024-11-15 10:38:51.166544] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:30.217 [2024-11-15 10:38:51.166612] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:30.217 10:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.217 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:30.217 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:30.217 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:30.217 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:30.217 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:30.217 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:30.217 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.217 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:30.217 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:30.217 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:30.217 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:30.217 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.217 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.217 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.217 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.217 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.217 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.217 10:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.217 10:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.217 10:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.217 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.217 "name": "Existed_Raid", 00:10:30.217 "uuid": "11ff42d7-b1de-4816-8adf-70ed0144852f", 00:10:30.217 "strip_size_kb": 64, 00:10:30.217 "state": "offline", 00:10:30.217 "raid_level": "raid0", 00:10:30.217 "superblock": false, 00:10:30.217 "num_base_bdevs": 4, 00:10:30.217 "num_base_bdevs_discovered": 3, 00:10:30.217 "num_base_bdevs_operational": 3, 00:10:30.217 "base_bdevs_list": [ 00:10:30.217 { 00:10:30.217 "name": null, 00:10:30.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.217 "is_configured": false, 00:10:30.217 "data_offset": 0, 00:10:30.217 "data_size": 65536 00:10:30.217 }, 00:10:30.217 { 00:10:30.217 "name": "BaseBdev2", 00:10:30.217 "uuid": "8ea3932f-19c5-4e26-9096-44297c40bfbb", 00:10:30.217 "is_configured": 
true, 00:10:30.217 "data_offset": 0, 00:10:30.217 "data_size": 65536 00:10:30.217 }, 00:10:30.217 { 00:10:30.217 "name": "BaseBdev3", 00:10:30.217 "uuid": "fb5aeba3-256d-40df-9681-17e9a4be3a97", 00:10:30.217 "is_configured": true, 00:10:30.217 "data_offset": 0, 00:10:30.217 "data_size": 65536 00:10:30.217 }, 00:10:30.217 { 00:10:30.217 "name": "BaseBdev4", 00:10:30.217 "uuid": "c6ae6b41-9c0d-475e-94f3-2b9ac3fd9489", 00:10:30.217 "is_configured": true, 00:10:30.217 "data_offset": 0, 00:10:30.217 "data_size": 65536 00:10:30.217 } 00:10:30.217 ] 00:10:30.217 }' 00:10:30.217 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.217 10:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.784 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:30.784 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:30.784 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.785 10:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.785 10:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.785 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:30.785 10:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.785 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:30.785 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:30.785 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:30.785 10:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:30.785 10:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.785 [2024-11-15 10:38:51.827366] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:30.785 10:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.785 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:30.785 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:30.785 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.785 10:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.785 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:30.785 10:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.785 10:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.042 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:31.042 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:31.042 10:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:31.042 10:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.042 10:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.042 [2024-11-15 10:38:51.971342] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:31.042 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.042 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:31.042 10:38:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:31.042 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.042 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:31.042 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.042 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.042 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.042 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:31.042 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:31.042 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:31.042 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.042 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.042 [2024-11-15 10:38:52.116802] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:31.042 [2024-11-15 10:38:52.116999] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:31.301 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.301 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:31.301 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:31.301 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.301 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:31.301 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:31.301 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.301 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.301 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:31.301 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:31.301 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:31.301 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:31.301 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:31.301 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:31.301 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.301 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.301 BaseBdev2 00:10:31.301 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.301 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:31.301 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:31.301 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:31.301 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:31.301 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:31.301 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:31.301 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:31.301 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.301 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.301 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.301 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:31.301 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.301 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.301 [ 00:10:31.301 { 00:10:31.301 "name": "BaseBdev2", 00:10:31.301 "aliases": [ 00:10:31.301 "2f4e2cc5-9650-4c4e-9240-228e5e59a0fb" 00:10:31.301 ], 00:10:31.301 "product_name": "Malloc disk", 00:10:31.301 "block_size": 512, 00:10:31.301 "num_blocks": 65536, 00:10:31.301 "uuid": "2f4e2cc5-9650-4c4e-9240-228e5e59a0fb", 00:10:31.301 "assigned_rate_limits": { 00:10:31.301 "rw_ios_per_sec": 0, 00:10:31.301 "rw_mbytes_per_sec": 0, 00:10:31.301 "r_mbytes_per_sec": 0, 00:10:31.301 "w_mbytes_per_sec": 0 00:10:31.301 }, 00:10:31.301 "claimed": false, 00:10:31.301 "zoned": false, 00:10:31.301 "supported_io_types": { 00:10:31.301 "read": true, 00:10:31.301 "write": true, 00:10:31.301 "unmap": true, 00:10:31.301 "flush": true, 00:10:31.301 "reset": true, 00:10:31.301 "nvme_admin": false, 00:10:31.301 "nvme_io": false, 00:10:31.301 "nvme_io_md": false, 00:10:31.301 "write_zeroes": true, 00:10:31.301 "zcopy": true, 00:10:31.301 "get_zone_info": false, 00:10:31.301 "zone_management": false, 00:10:31.301 "zone_append": false, 00:10:31.301 "compare": false, 00:10:31.301 "compare_and_write": false, 00:10:31.301 "abort": true, 00:10:31.301 "seek_hole": false, 00:10:31.301 
"seek_data": false, 00:10:31.301 "copy": true, 00:10:31.301 "nvme_iov_md": false 00:10:31.301 }, 00:10:31.301 "memory_domains": [ 00:10:31.301 { 00:10:31.301 "dma_device_id": "system", 00:10:31.301 "dma_device_type": 1 00:10:31.301 }, 00:10:31.301 { 00:10:31.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.301 "dma_device_type": 2 00:10:31.301 } 00:10:31.301 ], 00:10:31.301 "driver_specific": {} 00:10:31.301 } 00:10:31.301 ] 00:10:31.301 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.301 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:31.301 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:31.301 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:31.301 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:31.301 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.301 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.301 BaseBdev3 00:10:31.301 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.301 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:31.301 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:31.301 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:31.301 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:31.302 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:31.302 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:31.302 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:31.302 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.302 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.302 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.302 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:31.302 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.302 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.302 [ 00:10:31.302 { 00:10:31.302 "name": "BaseBdev3", 00:10:31.302 "aliases": [ 00:10:31.302 "9c570f74-77b7-481d-ac39-cbddaf8649e3" 00:10:31.302 ], 00:10:31.302 "product_name": "Malloc disk", 00:10:31.302 "block_size": 512, 00:10:31.302 "num_blocks": 65536, 00:10:31.302 "uuid": "9c570f74-77b7-481d-ac39-cbddaf8649e3", 00:10:31.302 "assigned_rate_limits": { 00:10:31.302 "rw_ios_per_sec": 0, 00:10:31.302 "rw_mbytes_per_sec": 0, 00:10:31.302 "r_mbytes_per_sec": 0, 00:10:31.302 "w_mbytes_per_sec": 0 00:10:31.302 }, 00:10:31.302 "claimed": false, 00:10:31.302 "zoned": false, 00:10:31.302 "supported_io_types": { 00:10:31.302 "read": true, 00:10:31.302 "write": true, 00:10:31.302 "unmap": true, 00:10:31.302 "flush": true, 00:10:31.302 "reset": true, 00:10:31.302 "nvme_admin": false, 00:10:31.302 "nvme_io": false, 00:10:31.302 "nvme_io_md": false, 00:10:31.302 "write_zeroes": true, 00:10:31.302 "zcopy": true, 00:10:31.302 "get_zone_info": false, 00:10:31.302 "zone_management": false, 00:10:31.302 "zone_append": false, 00:10:31.302 "compare": false, 00:10:31.302 "compare_and_write": false, 00:10:31.302 "abort": true, 00:10:31.302 "seek_hole": false, 00:10:31.302 "seek_data": false, 
00:10:31.302 "copy": true, 00:10:31.302 "nvme_iov_md": false 00:10:31.302 }, 00:10:31.302 "memory_domains": [ 00:10:31.302 { 00:10:31.302 "dma_device_id": "system", 00:10:31.302 "dma_device_type": 1 00:10:31.302 }, 00:10:31.302 { 00:10:31.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.302 "dma_device_type": 2 00:10:31.302 } 00:10:31.302 ], 00:10:31.302 "driver_specific": {} 00:10:31.302 } 00:10:31.302 ] 00:10:31.302 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.302 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:31.302 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:31.302 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:31.302 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:31.302 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.302 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.302 BaseBdev4 00:10:31.302 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.302 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:31.302 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:31.302 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:31.302 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:31.302 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:31.302 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:31.302 
10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:31.302 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.302 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.302 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.302 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:31.302 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.302 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.302 [ 00:10:31.302 { 00:10:31.302 "name": "BaseBdev4", 00:10:31.302 "aliases": [ 00:10:31.302 "db8f57bc-b027-49ba-8bb7-eecd3231ee47" 00:10:31.302 ], 00:10:31.302 "product_name": "Malloc disk", 00:10:31.302 "block_size": 512, 00:10:31.302 "num_blocks": 65536, 00:10:31.302 "uuid": "db8f57bc-b027-49ba-8bb7-eecd3231ee47", 00:10:31.302 "assigned_rate_limits": { 00:10:31.302 "rw_ios_per_sec": 0, 00:10:31.302 "rw_mbytes_per_sec": 0, 00:10:31.302 "r_mbytes_per_sec": 0, 00:10:31.302 "w_mbytes_per_sec": 0 00:10:31.302 }, 00:10:31.302 "claimed": false, 00:10:31.302 "zoned": false, 00:10:31.302 "supported_io_types": { 00:10:31.302 "read": true, 00:10:31.302 "write": true, 00:10:31.302 "unmap": true, 00:10:31.302 "flush": true, 00:10:31.302 "reset": true, 00:10:31.302 "nvme_admin": false, 00:10:31.302 "nvme_io": false, 00:10:31.302 "nvme_io_md": false, 00:10:31.302 "write_zeroes": true, 00:10:31.302 "zcopy": true, 00:10:31.302 "get_zone_info": false, 00:10:31.560 "zone_management": false, 00:10:31.560 "zone_append": false, 00:10:31.560 "compare": false, 00:10:31.560 "compare_and_write": false, 00:10:31.560 "abort": true, 00:10:31.560 "seek_hole": false, 00:10:31.560 "seek_data": false, 00:10:31.560 
"copy": true, 00:10:31.560 "nvme_iov_md": false 00:10:31.560 }, 00:10:31.560 "memory_domains": [ 00:10:31.560 { 00:10:31.560 "dma_device_id": "system", 00:10:31.560 "dma_device_type": 1 00:10:31.560 }, 00:10:31.560 { 00:10:31.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.560 "dma_device_type": 2 00:10:31.560 } 00:10:31.560 ], 00:10:31.560 "driver_specific": {} 00:10:31.560 } 00:10:31.560 ] 00:10:31.560 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.560 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:31.560 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:31.560 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:31.560 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:31.560 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.560 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.560 [2024-11-15 10:38:52.471467] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:31.560 [2024-11-15 10:38:52.471667] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:31.560 [2024-11-15 10:38:52.471806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:31.560 [2024-11-15 10:38:52.474239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:31.561 [2024-11-15 10:38:52.474438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:31.561 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.561 10:38:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:31.561 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.561 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.561 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:31.561 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.561 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.561 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.561 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.561 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.561 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.561 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.561 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.561 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.561 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.561 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.561 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.561 "name": "Existed_Raid", 00:10:31.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.561 "strip_size_kb": 64, 00:10:31.561 "state": "configuring", 00:10:31.561 
"raid_level": "raid0", 00:10:31.561 "superblock": false, 00:10:31.561 "num_base_bdevs": 4, 00:10:31.561 "num_base_bdevs_discovered": 3, 00:10:31.561 "num_base_bdevs_operational": 4, 00:10:31.561 "base_bdevs_list": [ 00:10:31.561 { 00:10:31.561 "name": "BaseBdev1", 00:10:31.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.561 "is_configured": false, 00:10:31.561 "data_offset": 0, 00:10:31.561 "data_size": 0 00:10:31.561 }, 00:10:31.561 { 00:10:31.561 "name": "BaseBdev2", 00:10:31.561 "uuid": "2f4e2cc5-9650-4c4e-9240-228e5e59a0fb", 00:10:31.561 "is_configured": true, 00:10:31.561 "data_offset": 0, 00:10:31.561 "data_size": 65536 00:10:31.561 }, 00:10:31.561 { 00:10:31.561 "name": "BaseBdev3", 00:10:31.561 "uuid": "9c570f74-77b7-481d-ac39-cbddaf8649e3", 00:10:31.561 "is_configured": true, 00:10:31.561 "data_offset": 0, 00:10:31.561 "data_size": 65536 00:10:31.561 }, 00:10:31.561 { 00:10:31.561 "name": "BaseBdev4", 00:10:31.561 "uuid": "db8f57bc-b027-49ba-8bb7-eecd3231ee47", 00:10:31.561 "is_configured": true, 00:10:31.561 "data_offset": 0, 00:10:31.561 "data_size": 65536 00:10:31.561 } 00:10:31.561 ] 00:10:31.561 }' 00:10:31.561 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.561 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.127 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:32.127 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.127 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.127 [2024-11-15 10:38:52.987661] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:32.127 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.127 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:32.127 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.127 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.127 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:32.127 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.127 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.127 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.127 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.127 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.127 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.127 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.127 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.127 10:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.127 10:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.127 10:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.127 10:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.127 "name": "Existed_Raid", 00:10:32.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.127 "strip_size_kb": 64, 00:10:32.127 "state": "configuring", 00:10:32.127 "raid_level": "raid0", 00:10:32.127 "superblock": false, 00:10:32.127 
"num_base_bdevs": 4, 00:10:32.127 "num_base_bdevs_discovered": 2, 00:10:32.127 "num_base_bdevs_operational": 4, 00:10:32.127 "base_bdevs_list": [ 00:10:32.127 { 00:10:32.127 "name": "BaseBdev1", 00:10:32.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.127 "is_configured": false, 00:10:32.127 "data_offset": 0, 00:10:32.127 "data_size": 0 00:10:32.127 }, 00:10:32.127 { 00:10:32.127 "name": null, 00:10:32.127 "uuid": "2f4e2cc5-9650-4c4e-9240-228e5e59a0fb", 00:10:32.127 "is_configured": false, 00:10:32.127 "data_offset": 0, 00:10:32.127 "data_size": 65536 00:10:32.127 }, 00:10:32.127 { 00:10:32.127 "name": "BaseBdev3", 00:10:32.127 "uuid": "9c570f74-77b7-481d-ac39-cbddaf8649e3", 00:10:32.127 "is_configured": true, 00:10:32.127 "data_offset": 0, 00:10:32.127 "data_size": 65536 00:10:32.127 }, 00:10:32.127 { 00:10:32.127 "name": "BaseBdev4", 00:10:32.127 "uuid": "db8f57bc-b027-49ba-8bb7-eecd3231ee47", 00:10:32.127 "is_configured": true, 00:10:32.127 "data_offset": 0, 00:10:32.127 "data_size": 65536 00:10:32.127 } 00:10:32.127 ] 00:10:32.127 }' 00:10:32.127 10:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.127 10:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.385 10:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.385 10:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.385 10:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.385 10:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:32.385 10:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.643 10:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:32.643 10:38:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:32.643 10:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.643 10:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.643 [2024-11-15 10:38:53.599677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:32.643 BaseBdev1 00:10:32.643 10:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.643 10:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:32.643 10:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:32.643 10:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:32.643 10:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:32.643 10:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:32.643 10:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:32.643 10:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:32.643 10:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.643 10:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.643 10:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.643 10:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:32.643 10:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.643 10:38:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:32.643 [ 00:10:32.643 { 00:10:32.643 "name": "BaseBdev1", 00:10:32.643 "aliases": [ 00:10:32.643 "e60e385b-dbf0-4ae7-91e0-4087aae5f315" 00:10:32.643 ], 00:10:32.643 "product_name": "Malloc disk", 00:10:32.643 "block_size": 512, 00:10:32.643 "num_blocks": 65536, 00:10:32.643 "uuid": "e60e385b-dbf0-4ae7-91e0-4087aae5f315", 00:10:32.643 "assigned_rate_limits": { 00:10:32.643 "rw_ios_per_sec": 0, 00:10:32.643 "rw_mbytes_per_sec": 0, 00:10:32.643 "r_mbytes_per_sec": 0, 00:10:32.643 "w_mbytes_per_sec": 0 00:10:32.643 }, 00:10:32.643 "claimed": true, 00:10:32.643 "claim_type": "exclusive_write", 00:10:32.643 "zoned": false, 00:10:32.643 "supported_io_types": { 00:10:32.643 "read": true, 00:10:32.643 "write": true, 00:10:32.643 "unmap": true, 00:10:32.643 "flush": true, 00:10:32.643 "reset": true, 00:10:32.643 "nvme_admin": false, 00:10:32.643 "nvme_io": false, 00:10:32.643 "nvme_io_md": false, 00:10:32.643 "write_zeroes": true, 00:10:32.643 "zcopy": true, 00:10:32.643 "get_zone_info": false, 00:10:32.643 "zone_management": false, 00:10:32.643 "zone_append": false, 00:10:32.643 "compare": false, 00:10:32.643 "compare_and_write": false, 00:10:32.643 "abort": true, 00:10:32.643 "seek_hole": false, 00:10:32.643 "seek_data": false, 00:10:32.643 "copy": true, 00:10:32.643 "nvme_iov_md": false 00:10:32.643 }, 00:10:32.643 "memory_domains": [ 00:10:32.643 { 00:10:32.643 "dma_device_id": "system", 00:10:32.643 "dma_device_type": 1 00:10:32.643 }, 00:10:32.643 { 00:10:32.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.643 "dma_device_type": 2 00:10:32.643 } 00:10:32.643 ], 00:10:32.643 "driver_specific": {} 00:10:32.643 } 00:10:32.643 ] 00:10:32.643 10:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.643 10:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:32.643 10:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:32.643 10:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.643 10:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.643 10:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:32.643 10:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.643 10:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.643 10:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.643 10:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.643 10:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.643 10:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.643 10:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.643 10:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.643 10:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.643 10:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.643 10:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.643 10:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.643 "name": "Existed_Raid", 00:10:32.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.643 "strip_size_kb": 64, 00:10:32.643 "state": "configuring", 00:10:32.643 "raid_level": "raid0", 00:10:32.643 "superblock": false, 
00:10:32.643 "num_base_bdevs": 4, 00:10:32.643 "num_base_bdevs_discovered": 3, 00:10:32.643 "num_base_bdevs_operational": 4, 00:10:32.643 "base_bdevs_list": [ 00:10:32.643 { 00:10:32.643 "name": "BaseBdev1", 00:10:32.643 "uuid": "e60e385b-dbf0-4ae7-91e0-4087aae5f315", 00:10:32.643 "is_configured": true, 00:10:32.643 "data_offset": 0, 00:10:32.643 "data_size": 65536 00:10:32.643 }, 00:10:32.643 { 00:10:32.643 "name": null, 00:10:32.643 "uuid": "2f4e2cc5-9650-4c4e-9240-228e5e59a0fb", 00:10:32.644 "is_configured": false, 00:10:32.644 "data_offset": 0, 00:10:32.644 "data_size": 65536 00:10:32.644 }, 00:10:32.644 { 00:10:32.644 "name": "BaseBdev3", 00:10:32.644 "uuid": "9c570f74-77b7-481d-ac39-cbddaf8649e3", 00:10:32.644 "is_configured": true, 00:10:32.644 "data_offset": 0, 00:10:32.644 "data_size": 65536 00:10:32.644 }, 00:10:32.644 { 00:10:32.644 "name": "BaseBdev4", 00:10:32.644 "uuid": "db8f57bc-b027-49ba-8bb7-eecd3231ee47", 00:10:32.644 "is_configured": true, 00:10:32.644 "data_offset": 0, 00:10:32.644 "data_size": 65536 00:10:32.644 } 00:10:32.644 ] 00:10:32.644 }' 00:10:32.644 10:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.644 10:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.210 10:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.210 10:38:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.210 10:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:33.210 10:38:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.210 10:38:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.210 10:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:33.210 10:38:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:33.210 10:38:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.210 10:38:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.210 [2024-11-15 10:38:54.203941] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:33.210 10:38:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.210 10:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:33.210 10:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.210 10:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.210 10:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:33.210 10:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.210 10:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.210 10:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.210 10:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.210 10:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.210 10:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.210 10:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.210 10:38:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.210 10:38:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.210 10:38:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.210 10:38:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.210 10:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.210 "name": "Existed_Raid", 00:10:33.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.210 "strip_size_kb": 64, 00:10:33.210 "state": "configuring", 00:10:33.210 "raid_level": "raid0", 00:10:33.210 "superblock": false, 00:10:33.210 "num_base_bdevs": 4, 00:10:33.210 "num_base_bdevs_discovered": 2, 00:10:33.210 "num_base_bdevs_operational": 4, 00:10:33.210 "base_bdevs_list": [ 00:10:33.210 { 00:10:33.210 "name": "BaseBdev1", 00:10:33.210 "uuid": "e60e385b-dbf0-4ae7-91e0-4087aae5f315", 00:10:33.210 "is_configured": true, 00:10:33.210 "data_offset": 0, 00:10:33.210 "data_size": 65536 00:10:33.210 }, 00:10:33.210 { 00:10:33.210 "name": null, 00:10:33.210 "uuid": "2f4e2cc5-9650-4c4e-9240-228e5e59a0fb", 00:10:33.210 "is_configured": false, 00:10:33.210 "data_offset": 0, 00:10:33.210 "data_size": 65536 00:10:33.210 }, 00:10:33.210 { 00:10:33.210 "name": null, 00:10:33.210 "uuid": "9c570f74-77b7-481d-ac39-cbddaf8649e3", 00:10:33.210 "is_configured": false, 00:10:33.210 "data_offset": 0, 00:10:33.210 "data_size": 65536 00:10:33.210 }, 00:10:33.210 { 00:10:33.210 "name": "BaseBdev4", 00:10:33.210 "uuid": "db8f57bc-b027-49ba-8bb7-eecd3231ee47", 00:10:33.210 "is_configured": true, 00:10:33.210 "data_offset": 0, 00:10:33.210 "data_size": 65536 00:10:33.210 } 00:10:33.210 ] 00:10:33.210 }' 00:10:33.210 10:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.210 10:38:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.787 10:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:33.787 10:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:33.787 10:38:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.787 10:38:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.787 10:38:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.787 10:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:33.788 10:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:33.788 10:38:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.788 10:38:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.788 [2024-11-15 10:38:54.784155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:33.788 10:38:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.788 10:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:33.788 10:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.788 10:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.788 10:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:33.788 10:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.788 10:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.788 10:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:33.788 10:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.788 10:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.788 10:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.788 10:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.788 10:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.788 10:38:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.788 10:38:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.788 10:38:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.788 10:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.788 "name": "Existed_Raid", 00:10:33.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.788 "strip_size_kb": 64, 00:10:33.788 "state": "configuring", 00:10:33.788 "raid_level": "raid0", 00:10:33.788 "superblock": false, 00:10:33.788 "num_base_bdevs": 4, 00:10:33.788 "num_base_bdevs_discovered": 3, 00:10:33.788 "num_base_bdevs_operational": 4, 00:10:33.788 "base_bdevs_list": [ 00:10:33.788 { 00:10:33.788 "name": "BaseBdev1", 00:10:33.788 "uuid": "e60e385b-dbf0-4ae7-91e0-4087aae5f315", 00:10:33.788 "is_configured": true, 00:10:33.788 "data_offset": 0, 00:10:33.788 "data_size": 65536 00:10:33.788 }, 00:10:33.788 { 00:10:33.788 "name": null, 00:10:33.788 "uuid": "2f4e2cc5-9650-4c4e-9240-228e5e59a0fb", 00:10:33.788 "is_configured": false, 00:10:33.788 "data_offset": 0, 00:10:33.788 "data_size": 65536 00:10:33.788 }, 00:10:33.788 { 00:10:33.788 "name": "BaseBdev3", 00:10:33.788 "uuid": "9c570f74-77b7-481d-ac39-cbddaf8649e3", 00:10:33.788 "is_configured": 
true, 00:10:33.788 "data_offset": 0, 00:10:33.788 "data_size": 65536 00:10:33.788 }, 00:10:33.788 { 00:10:33.788 "name": "BaseBdev4", 00:10:33.788 "uuid": "db8f57bc-b027-49ba-8bb7-eecd3231ee47", 00:10:33.788 "is_configured": true, 00:10:33.788 "data_offset": 0, 00:10:33.788 "data_size": 65536 00:10:33.788 } 00:10:33.788 ] 00:10:33.788 }' 00:10:33.788 10:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.788 10:38:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.354 10:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.354 10:38:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.354 10:38:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.354 10:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:34.354 10:38:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.354 10:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:34.354 10:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:34.354 10:38:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.354 10:38:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.354 [2024-11-15 10:38:55.356340] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:34.354 10:38:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.354 10:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:34.354 10:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:34.354 10:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.354 10:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:34.354 10:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.354 10:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:34.354 10:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.354 10:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.354 10:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.354 10:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.354 10:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.354 10:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.354 10:38:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.354 10:38:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.354 10:38:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.354 10:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.354 "name": "Existed_Raid", 00:10:34.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.354 "strip_size_kb": 64, 00:10:34.354 "state": "configuring", 00:10:34.354 "raid_level": "raid0", 00:10:34.354 "superblock": false, 00:10:34.354 "num_base_bdevs": 4, 00:10:34.354 "num_base_bdevs_discovered": 2, 00:10:34.354 "num_base_bdevs_operational": 4, 00:10:34.354 
"base_bdevs_list": [ 00:10:34.354 { 00:10:34.354 "name": null, 00:10:34.354 "uuid": "e60e385b-dbf0-4ae7-91e0-4087aae5f315", 00:10:34.354 "is_configured": false, 00:10:34.354 "data_offset": 0, 00:10:34.354 "data_size": 65536 00:10:34.354 }, 00:10:34.354 { 00:10:34.354 "name": null, 00:10:34.354 "uuid": "2f4e2cc5-9650-4c4e-9240-228e5e59a0fb", 00:10:34.354 "is_configured": false, 00:10:34.354 "data_offset": 0, 00:10:34.354 "data_size": 65536 00:10:34.354 }, 00:10:34.354 { 00:10:34.354 "name": "BaseBdev3", 00:10:34.354 "uuid": "9c570f74-77b7-481d-ac39-cbddaf8649e3", 00:10:34.354 "is_configured": true, 00:10:34.354 "data_offset": 0, 00:10:34.354 "data_size": 65536 00:10:34.354 }, 00:10:34.354 { 00:10:34.354 "name": "BaseBdev4", 00:10:34.354 "uuid": "db8f57bc-b027-49ba-8bb7-eecd3231ee47", 00:10:34.354 "is_configured": true, 00:10:34.354 "data_offset": 0, 00:10:34.354 "data_size": 65536 00:10:34.354 } 00:10:34.354 ] 00:10:34.354 }' 00:10:34.354 10:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.354 10:38:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.921 10:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.921 10:38:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.921 10:38:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.921 10:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:34.921 10:38:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.921 10:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:34.921 10:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:34.921 10:38:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.921 10:38:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.921 [2024-11-15 10:38:55.998091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:34.921 10:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.921 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:34.921 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.921 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.921 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:34.921 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.921 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:34.921 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.921 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.921 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.921 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.921 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.921 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.921 10:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.921 10:38:56 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:34.921 10:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.921 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.921 "name": "Existed_Raid", 00:10:34.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.921 "strip_size_kb": 64, 00:10:34.921 "state": "configuring", 00:10:34.921 "raid_level": "raid0", 00:10:34.921 "superblock": false, 00:10:34.921 "num_base_bdevs": 4, 00:10:34.921 "num_base_bdevs_discovered": 3, 00:10:34.921 "num_base_bdevs_operational": 4, 00:10:34.921 "base_bdevs_list": [ 00:10:34.921 { 00:10:34.921 "name": null, 00:10:34.921 "uuid": "e60e385b-dbf0-4ae7-91e0-4087aae5f315", 00:10:34.921 "is_configured": false, 00:10:34.921 "data_offset": 0, 00:10:34.921 "data_size": 65536 00:10:34.921 }, 00:10:34.921 { 00:10:34.921 "name": "BaseBdev2", 00:10:34.921 "uuid": "2f4e2cc5-9650-4c4e-9240-228e5e59a0fb", 00:10:34.921 "is_configured": true, 00:10:34.921 "data_offset": 0, 00:10:34.921 "data_size": 65536 00:10:34.921 }, 00:10:34.921 { 00:10:34.921 "name": "BaseBdev3", 00:10:34.921 "uuid": "9c570f74-77b7-481d-ac39-cbddaf8649e3", 00:10:34.921 "is_configured": true, 00:10:34.921 "data_offset": 0, 00:10:34.921 "data_size": 65536 00:10:34.921 }, 00:10:34.921 { 00:10:34.921 "name": "BaseBdev4", 00:10:34.921 "uuid": "db8f57bc-b027-49ba-8bb7-eecd3231ee47", 00:10:34.921 "is_configured": true, 00:10:34.921 "data_offset": 0, 00:10:34.921 "data_size": 65536 00:10:34.921 } 00:10:34.921 ] 00:10:34.921 }' 00:10:34.921 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.921 10:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.487 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.487 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:10:35.487 10:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.487 10:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.487 10:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.487 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:35.487 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.487 10:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.487 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:35.487 10:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.488 10:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.488 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e60e385b-dbf0-4ae7-91e0-4087aae5f315 00:10:35.488 10:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.488 10:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.746 [2024-11-15 10:38:56.660226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:35.746 [2024-11-15 10:38:56.660452] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:35.746 [2024-11-15 10:38:56.660477] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:35.746 [2024-11-15 10:38:56.660859] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:35.746 [2024-11-15 10:38:56.661075] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:35.746 [2024-11-15 10:38:56.661098] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:35.746 [2024-11-15 10:38:56.661404] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:35.746 NewBaseBdev 00:10:35.746 10:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.746 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:35.746 10:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:35.746 10:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:35.746 10:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:35.746 10:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:35.746 10:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:35.746 10:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:35.746 10:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.746 10:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.746 10:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.746 10:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:35.746 10:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.746 10:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.746 [ 00:10:35.746 { 
00:10:35.746 "name": "NewBaseBdev", 00:10:35.746 "aliases": [ 00:10:35.746 "e60e385b-dbf0-4ae7-91e0-4087aae5f315" 00:10:35.746 ], 00:10:35.746 "product_name": "Malloc disk", 00:10:35.746 "block_size": 512, 00:10:35.746 "num_blocks": 65536, 00:10:35.746 "uuid": "e60e385b-dbf0-4ae7-91e0-4087aae5f315", 00:10:35.746 "assigned_rate_limits": { 00:10:35.746 "rw_ios_per_sec": 0, 00:10:35.746 "rw_mbytes_per_sec": 0, 00:10:35.746 "r_mbytes_per_sec": 0, 00:10:35.746 "w_mbytes_per_sec": 0 00:10:35.746 }, 00:10:35.746 "claimed": true, 00:10:35.746 "claim_type": "exclusive_write", 00:10:35.746 "zoned": false, 00:10:35.746 "supported_io_types": { 00:10:35.746 "read": true, 00:10:35.746 "write": true, 00:10:35.746 "unmap": true, 00:10:35.746 "flush": true, 00:10:35.746 "reset": true, 00:10:35.746 "nvme_admin": false, 00:10:35.746 "nvme_io": false, 00:10:35.746 "nvme_io_md": false, 00:10:35.746 "write_zeroes": true, 00:10:35.746 "zcopy": true, 00:10:35.746 "get_zone_info": false, 00:10:35.746 "zone_management": false, 00:10:35.746 "zone_append": false, 00:10:35.746 "compare": false, 00:10:35.746 "compare_and_write": false, 00:10:35.746 "abort": true, 00:10:35.746 "seek_hole": false, 00:10:35.746 "seek_data": false, 00:10:35.746 "copy": true, 00:10:35.746 "nvme_iov_md": false 00:10:35.746 }, 00:10:35.746 "memory_domains": [ 00:10:35.746 { 00:10:35.746 "dma_device_id": "system", 00:10:35.746 "dma_device_type": 1 00:10:35.746 }, 00:10:35.746 { 00:10:35.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.746 "dma_device_type": 2 00:10:35.746 } 00:10:35.746 ], 00:10:35.746 "driver_specific": {} 00:10:35.746 } 00:10:35.746 ] 00:10:35.746 10:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.746 10:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:35.746 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:35.746 
10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.746 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:35.746 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:35.747 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.747 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.747 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.747 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.747 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.747 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.747 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.747 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.747 10:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.747 10:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.747 10:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.747 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.747 "name": "Existed_Raid", 00:10:35.747 "uuid": "6947aa0f-8ab0-45db-b1f4-8e6f2d68cb1d", 00:10:35.747 "strip_size_kb": 64, 00:10:35.747 "state": "online", 00:10:35.747 "raid_level": "raid0", 00:10:35.747 "superblock": false, 00:10:35.747 "num_base_bdevs": 4, 00:10:35.747 "num_base_bdevs_discovered": 4, 00:10:35.747 
"num_base_bdevs_operational": 4, 00:10:35.747 "base_bdevs_list": [ 00:10:35.747 { 00:10:35.747 "name": "NewBaseBdev", 00:10:35.747 "uuid": "e60e385b-dbf0-4ae7-91e0-4087aae5f315", 00:10:35.747 "is_configured": true, 00:10:35.747 "data_offset": 0, 00:10:35.747 "data_size": 65536 00:10:35.747 }, 00:10:35.747 { 00:10:35.747 "name": "BaseBdev2", 00:10:35.747 "uuid": "2f4e2cc5-9650-4c4e-9240-228e5e59a0fb", 00:10:35.747 "is_configured": true, 00:10:35.747 "data_offset": 0, 00:10:35.747 "data_size": 65536 00:10:35.747 }, 00:10:35.747 { 00:10:35.747 "name": "BaseBdev3", 00:10:35.747 "uuid": "9c570f74-77b7-481d-ac39-cbddaf8649e3", 00:10:35.747 "is_configured": true, 00:10:35.747 "data_offset": 0, 00:10:35.747 "data_size": 65536 00:10:35.747 }, 00:10:35.747 { 00:10:35.747 "name": "BaseBdev4", 00:10:35.747 "uuid": "db8f57bc-b027-49ba-8bb7-eecd3231ee47", 00:10:35.747 "is_configured": true, 00:10:35.747 "data_offset": 0, 00:10:35.747 "data_size": 65536 00:10:35.747 } 00:10:35.747 ] 00:10:35.747 }' 00:10:35.747 10:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.747 10:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.314 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:36.314 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:36.314 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:36.314 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:36.314 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:36.314 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:36.314 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:10:36.314 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.314 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.314 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:36.314 [2024-11-15 10:38:57.228937] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:36.314 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.314 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:36.314 "name": "Existed_Raid", 00:10:36.314 "aliases": [ 00:10:36.314 "6947aa0f-8ab0-45db-b1f4-8e6f2d68cb1d" 00:10:36.314 ], 00:10:36.314 "product_name": "Raid Volume", 00:10:36.314 "block_size": 512, 00:10:36.314 "num_blocks": 262144, 00:10:36.314 "uuid": "6947aa0f-8ab0-45db-b1f4-8e6f2d68cb1d", 00:10:36.314 "assigned_rate_limits": { 00:10:36.314 "rw_ios_per_sec": 0, 00:10:36.314 "rw_mbytes_per_sec": 0, 00:10:36.314 "r_mbytes_per_sec": 0, 00:10:36.314 "w_mbytes_per_sec": 0 00:10:36.314 }, 00:10:36.314 "claimed": false, 00:10:36.314 "zoned": false, 00:10:36.314 "supported_io_types": { 00:10:36.314 "read": true, 00:10:36.314 "write": true, 00:10:36.314 "unmap": true, 00:10:36.314 "flush": true, 00:10:36.314 "reset": true, 00:10:36.314 "nvme_admin": false, 00:10:36.314 "nvme_io": false, 00:10:36.314 "nvme_io_md": false, 00:10:36.314 "write_zeroes": true, 00:10:36.314 "zcopy": false, 00:10:36.314 "get_zone_info": false, 00:10:36.314 "zone_management": false, 00:10:36.314 "zone_append": false, 00:10:36.314 "compare": false, 00:10:36.314 "compare_and_write": false, 00:10:36.314 "abort": false, 00:10:36.314 "seek_hole": false, 00:10:36.314 "seek_data": false, 00:10:36.314 "copy": false, 00:10:36.314 "nvme_iov_md": false 00:10:36.314 }, 00:10:36.314 "memory_domains": [ 00:10:36.314 { 00:10:36.314 "dma_device_id": "system", 
00:10:36.314 "dma_device_type": 1 00:10:36.314 }, 00:10:36.314 { 00:10:36.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.314 "dma_device_type": 2 00:10:36.314 }, 00:10:36.314 { 00:10:36.314 "dma_device_id": "system", 00:10:36.314 "dma_device_type": 1 00:10:36.314 }, 00:10:36.314 { 00:10:36.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.314 "dma_device_type": 2 00:10:36.314 }, 00:10:36.314 { 00:10:36.314 "dma_device_id": "system", 00:10:36.314 "dma_device_type": 1 00:10:36.314 }, 00:10:36.314 { 00:10:36.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.314 "dma_device_type": 2 00:10:36.314 }, 00:10:36.314 { 00:10:36.314 "dma_device_id": "system", 00:10:36.314 "dma_device_type": 1 00:10:36.314 }, 00:10:36.314 { 00:10:36.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.315 "dma_device_type": 2 00:10:36.315 } 00:10:36.315 ], 00:10:36.315 "driver_specific": { 00:10:36.315 "raid": { 00:10:36.315 "uuid": "6947aa0f-8ab0-45db-b1f4-8e6f2d68cb1d", 00:10:36.315 "strip_size_kb": 64, 00:10:36.315 "state": "online", 00:10:36.315 "raid_level": "raid0", 00:10:36.315 "superblock": false, 00:10:36.315 "num_base_bdevs": 4, 00:10:36.315 "num_base_bdevs_discovered": 4, 00:10:36.315 "num_base_bdevs_operational": 4, 00:10:36.315 "base_bdevs_list": [ 00:10:36.315 { 00:10:36.315 "name": "NewBaseBdev", 00:10:36.315 "uuid": "e60e385b-dbf0-4ae7-91e0-4087aae5f315", 00:10:36.315 "is_configured": true, 00:10:36.315 "data_offset": 0, 00:10:36.315 "data_size": 65536 00:10:36.315 }, 00:10:36.315 { 00:10:36.315 "name": "BaseBdev2", 00:10:36.315 "uuid": "2f4e2cc5-9650-4c4e-9240-228e5e59a0fb", 00:10:36.315 "is_configured": true, 00:10:36.315 "data_offset": 0, 00:10:36.315 "data_size": 65536 00:10:36.315 }, 00:10:36.315 { 00:10:36.315 "name": "BaseBdev3", 00:10:36.315 "uuid": "9c570f74-77b7-481d-ac39-cbddaf8649e3", 00:10:36.315 "is_configured": true, 00:10:36.315 "data_offset": 0, 00:10:36.315 "data_size": 65536 00:10:36.315 }, 00:10:36.315 { 00:10:36.315 "name": "BaseBdev4", 
00:10:36.315 "uuid": "db8f57bc-b027-49ba-8bb7-eecd3231ee47", 00:10:36.315 "is_configured": true, 00:10:36.315 "data_offset": 0, 00:10:36.315 "data_size": 65536 00:10:36.315 } 00:10:36.315 ] 00:10:36.315 } 00:10:36.315 } 00:10:36.315 }' 00:10:36.315 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:36.315 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:36.315 BaseBdev2 00:10:36.315 BaseBdev3 00:10:36.315 BaseBdev4' 00:10:36.315 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.315 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:36.315 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.315 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.315 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:36.315 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.315 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.315 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.315 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.315 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.315 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.315 10:38:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:36.315 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.315 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.315 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.315 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.315 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.315 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.315 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.315 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:36.315 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.315 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.315 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.574 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.574 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.574 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.574 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.574 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:36.574 10:38:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.574 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.574 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.574 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.574 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.574 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.574 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:36.574 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.574 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.574 [2024-11-15 10:38:57.576589] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:36.574 [2024-11-15 10:38:57.576762] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:36.574 [2024-11-15 10:38:57.576989] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:36.574 [2024-11-15 10:38:57.577191] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:36.574 [2024-11-15 10:38:57.577308] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:36.574 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.574 10:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69432 00:10:36.574 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 
-- # '[' -z 69432 ']' 00:10:36.574 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69432 00:10:36.574 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:36.574 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:36.574 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69432 00:10:36.574 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:36.574 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:36.574 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69432' 00:10:36.574 killing process with pid 69432 00:10:36.574 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69432 00:10:36.574 [2024-11-15 10:38:57.610856] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:36.574 10:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69432 00:10:36.833 [2024-11-15 10:38:57.955167] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:38.207 10:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:38.207 00:10:38.207 real 0m12.629s 00:10:38.207 user 0m21.055s 00:10:38.207 sys 0m1.657s 00:10:38.207 10:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:38.207 ************************************ 00:10:38.207 END TEST raid_state_function_test 00:10:38.207 ************************************ 00:10:38.207 10:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.207 10:38:59 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 
00:10:38.207 10:38:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:38.207 10:38:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:38.207 10:38:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:38.207 ************************************ 00:10:38.207 START TEST raid_state_function_test_sb 00:10:38.207 ************************************ 00:10:38.207 10:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:10:38.207 10:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:38.207 10:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:38.207 10:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:38.207 10:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:38.207 10:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:38.207 10:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:38.207 10:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:38.207 10:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:38.207 10:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:38.207 10:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:38.207 10:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:38.207 10:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:38.207 10:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:38.207 10:38:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:38.207 10:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:38.207 10:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:38.207 10:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:38.207 10:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:38.207 Process raid pid: 70123 00:10:38.207 10:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:38.207 10:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:38.207 10:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:38.208 10:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:38.208 10:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:38.208 10:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:38.208 10:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:38.208 10:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:38.208 10:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:38.208 10:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:38.208 10:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:38.208 10:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70123 00:10:38.208 10:38:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70123' 00:10:38.208 10:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:38.208 10:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70123 00:10:38.208 10:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70123 ']' 00:10:38.208 10:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.208 10:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:38.208 10:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.208 10:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:38.208 10:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.208 [2024-11-15 10:38:59.159729] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:10:38.208 [2024-11-15 10:38:59.160161] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:38.208 [2024-11-15 10:38:59.346142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.466 [2024-11-15 10:38:59.478785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.725 [2024-11-15 10:38:59.686374] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:38.725 [2024-11-15 10:38:59.686566] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:39.292 10:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:39.292 10:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:39.292 10:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:39.292 10:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.292 10:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.292 [2024-11-15 10:39:00.197792] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:39.292 [2024-11-15 10:39:00.197992] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:39.292 [2024-11-15 10:39:00.198022] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:39.292 [2024-11-15 10:39:00.198041] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:39.292 [2024-11-15 10:39:00.198052] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:10:39.292 [2024-11-15 10:39:00.198067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:39.292 [2024-11-15 10:39:00.198077] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:39.292 [2024-11-15 10:39:00.198091] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:39.292 10:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.292 10:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:39.292 10:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.292 10:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.292 10:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:39.292 10:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.292 10:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.292 10:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.292 10:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.292 10:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.292 10:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.292 10:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.292 10:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.292 10:39:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.292 10:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.292 10:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.292 10:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.292 "name": "Existed_Raid", 00:10:39.292 "uuid": "8d5ede4a-b7de-42f3-8577-8cbac8fd28f9", 00:10:39.292 "strip_size_kb": 64, 00:10:39.292 "state": "configuring", 00:10:39.292 "raid_level": "raid0", 00:10:39.292 "superblock": true, 00:10:39.292 "num_base_bdevs": 4, 00:10:39.292 "num_base_bdevs_discovered": 0, 00:10:39.292 "num_base_bdevs_operational": 4, 00:10:39.292 "base_bdevs_list": [ 00:10:39.292 { 00:10:39.292 "name": "BaseBdev1", 00:10:39.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.292 "is_configured": false, 00:10:39.292 "data_offset": 0, 00:10:39.292 "data_size": 0 00:10:39.292 }, 00:10:39.292 { 00:10:39.292 "name": "BaseBdev2", 00:10:39.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.292 "is_configured": false, 00:10:39.292 "data_offset": 0, 00:10:39.292 "data_size": 0 00:10:39.292 }, 00:10:39.292 { 00:10:39.292 "name": "BaseBdev3", 00:10:39.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.292 "is_configured": false, 00:10:39.292 "data_offset": 0, 00:10:39.292 "data_size": 0 00:10:39.292 }, 00:10:39.292 { 00:10:39.292 "name": "BaseBdev4", 00:10:39.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.292 "is_configured": false, 00:10:39.292 "data_offset": 0, 00:10:39.292 "data_size": 0 00:10:39.292 } 00:10:39.292 ] 00:10:39.292 }' 00:10:39.292 10:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.292 10:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.550 10:39:00 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:39.550 10:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.550 10:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.550 [2024-11-15 10:39:00.701875] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:39.550 [2024-11-15 10:39:00.701922] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:39.550 10:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.550 10:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:39.550 10:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.550 10:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.808 [2024-11-15 10:39:00.709846] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:39.809 [2024-11-15 10:39:00.709900] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:39.809 [2024-11-15 10:39:00.709916] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:39.809 [2024-11-15 10:39:00.709932] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:39.809 [2024-11-15 10:39:00.709942] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:39.809 [2024-11-15 10:39:00.709956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:39.809 [2024-11-15 10:39:00.709965] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:10:39.809 [2024-11-15 10:39:00.709979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:39.809 10:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.809 10:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:39.809 10:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.809 10:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.809 [2024-11-15 10:39:00.754917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:39.809 BaseBdev1 00:10:39.809 10:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.809 10:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:39.809 10:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:39.809 10:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:39.809 10:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:39.809 10:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:39.809 10:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:39.809 10:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:39.809 10:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.809 10:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.809 10:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:39.809 10:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:39.809 10:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.809 10:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.809 [ 00:10:39.809 { 00:10:39.809 "name": "BaseBdev1", 00:10:39.809 "aliases": [ 00:10:39.809 "2625f2d6-abb0-437b-acb2-49961d339e74" 00:10:39.809 ], 00:10:39.809 "product_name": "Malloc disk", 00:10:39.809 "block_size": 512, 00:10:39.809 "num_blocks": 65536, 00:10:39.809 "uuid": "2625f2d6-abb0-437b-acb2-49961d339e74", 00:10:39.809 "assigned_rate_limits": { 00:10:39.809 "rw_ios_per_sec": 0, 00:10:39.809 "rw_mbytes_per_sec": 0, 00:10:39.809 "r_mbytes_per_sec": 0, 00:10:39.809 "w_mbytes_per_sec": 0 00:10:39.809 }, 00:10:39.809 "claimed": true, 00:10:39.809 "claim_type": "exclusive_write", 00:10:39.809 "zoned": false, 00:10:39.809 "supported_io_types": { 00:10:39.809 "read": true, 00:10:39.809 "write": true, 00:10:39.809 "unmap": true, 00:10:39.809 "flush": true, 00:10:39.809 "reset": true, 00:10:39.809 "nvme_admin": false, 00:10:39.809 "nvme_io": false, 00:10:39.809 "nvme_io_md": false, 00:10:39.809 "write_zeroes": true, 00:10:39.809 "zcopy": true, 00:10:39.809 "get_zone_info": false, 00:10:39.809 "zone_management": false, 00:10:39.809 "zone_append": false, 00:10:39.809 "compare": false, 00:10:39.809 "compare_and_write": false, 00:10:39.809 "abort": true, 00:10:39.809 "seek_hole": false, 00:10:39.809 "seek_data": false, 00:10:39.809 "copy": true, 00:10:39.809 "nvme_iov_md": false 00:10:39.809 }, 00:10:39.809 "memory_domains": [ 00:10:39.809 { 00:10:39.809 "dma_device_id": "system", 00:10:39.809 "dma_device_type": 1 00:10:39.809 }, 00:10:39.809 { 00:10:39.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.809 "dma_device_type": 2 00:10:39.809 } 00:10:39.809 ], 00:10:39.809 "driver_specific": {} 
00:10:39.809 } 00:10:39.809 ] 00:10:39.809 10:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.809 10:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:39.809 10:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:39.809 10:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.809 10:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.809 10:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:39.809 10:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.809 10:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.809 10:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.809 10:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.809 10:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.809 10:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.809 10:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.809 10:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.809 10:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.809 10:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.809 10:39:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.809 10:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.809 "name": "Existed_Raid", 00:10:39.809 "uuid": "166ca43a-b781-4e36-9311-6d66f100cbed", 00:10:39.809 "strip_size_kb": 64, 00:10:39.809 "state": "configuring", 00:10:39.809 "raid_level": "raid0", 00:10:39.809 "superblock": true, 00:10:39.809 "num_base_bdevs": 4, 00:10:39.809 "num_base_bdevs_discovered": 1, 00:10:39.809 "num_base_bdevs_operational": 4, 00:10:39.809 "base_bdevs_list": [ 00:10:39.809 { 00:10:39.809 "name": "BaseBdev1", 00:10:39.809 "uuid": "2625f2d6-abb0-437b-acb2-49961d339e74", 00:10:39.809 "is_configured": true, 00:10:39.809 "data_offset": 2048, 00:10:39.809 "data_size": 63488 00:10:39.809 }, 00:10:39.809 { 00:10:39.809 "name": "BaseBdev2", 00:10:39.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.809 "is_configured": false, 00:10:39.809 "data_offset": 0, 00:10:39.809 "data_size": 0 00:10:39.809 }, 00:10:39.809 { 00:10:39.809 "name": "BaseBdev3", 00:10:39.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.809 "is_configured": false, 00:10:39.809 "data_offset": 0, 00:10:39.809 "data_size": 0 00:10:39.809 }, 00:10:39.809 { 00:10:39.809 "name": "BaseBdev4", 00:10:39.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.809 "is_configured": false, 00:10:39.809 "data_offset": 0, 00:10:39.809 "data_size": 0 00:10:39.809 } 00:10:39.809 ] 00:10:39.809 }' 00:10:39.809 10:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.809 10:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.375 10:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:40.375 10:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.375 10:39:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:40.375 [2024-11-15 10:39:01.299107] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:40.375 [2024-11-15 10:39:01.299312] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:40.375 10:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.375 10:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:40.375 10:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.375 10:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.375 [2024-11-15 10:39:01.307162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:40.375 [2024-11-15 10:39:01.309718] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:40.375 [2024-11-15 10:39:01.309893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:40.375 [2024-11-15 10:39:01.309929] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:40.375 [2024-11-15 10:39:01.309950] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:40.375 [2024-11-15 10:39:01.309961] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:40.375 [2024-11-15 10:39:01.309974] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:40.375 10:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.375 10:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:40.375 10:39:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:40.375 10:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:40.375 10:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.375 10:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.375 10:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:40.375 10:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.375 10:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.375 10:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.375 10:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.375 10:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.375 10:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.375 10:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.375 10:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.375 10:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.375 10:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.375 10:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.375 10:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.375 "name": 
"Existed_Raid", 00:10:40.375 "uuid": "c98f0e05-ff30-4bb9-abb5-d9c62383de6b", 00:10:40.375 "strip_size_kb": 64, 00:10:40.375 "state": "configuring", 00:10:40.375 "raid_level": "raid0", 00:10:40.375 "superblock": true, 00:10:40.375 "num_base_bdevs": 4, 00:10:40.375 "num_base_bdevs_discovered": 1, 00:10:40.375 "num_base_bdevs_operational": 4, 00:10:40.375 "base_bdevs_list": [ 00:10:40.375 { 00:10:40.375 "name": "BaseBdev1", 00:10:40.375 "uuid": "2625f2d6-abb0-437b-acb2-49961d339e74", 00:10:40.375 "is_configured": true, 00:10:40.375 "data_offset": 2048, 00:10:40.375 "data_size": 63488 00:10:40.375 }, 00:10:40.376 { 00:10:40.376 "name": "BaseBdev2", 00:10:40.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.376 "is_configured": false, 00:10:40.376 "data_offset": 0, 00:10:40.376 "data_size": 0 00:10:40.376 }, 00:10:40.376 { 00:10:40.376 "name": "BaseBdev3", 00:10:40.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.376 "is_configured": false, 00:10:40.376 "data_offset": 0, 00:10:40.376 "data_size": 0 00:10:40.376 }, 00:10:40.376 { 00:10:40.376 "name": "BaseBdev4", 00:10:40.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.376 "is_configured": false, 00:10:40.376 "data_offset": 0, 00:10:40.376 "data_size": 0 00:10:40.376 } 00:10:40.376 ] 00:10:40.376 }' 00:10:40.376 10:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.376 10:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.942 10:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:40.942 10:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.942 10:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.942 [2024-11-15 10:39:01.894237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:10:40.942 BaseBdev2 00:10:40.942 10:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.942 10:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:40.942 10:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:40.942 10:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:40.942 10:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:40.942 10:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:40.942 10:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:40.942 10:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:40.942 10:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.942 10:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.942 10:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.942 10:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:40.942 10:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.942 10:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.942 [ 00:10:40.942 { 00:10:40.942 "name": "BaseBdev2", 00:10:40.942 "aliases": [ 00:10:40.942 "414a0218-34a8-4b7c-817f-36400c309cec" 00:10:40.942 ], 00:10:40.942 "product_name": "Malloc disk", 00:10:40.942 "block_size": 512, 00:10:40.942 "num_blocks": 65536, 00:10:40.942 "uuid": "414a0218-34a8-4b7c-817f-36400c309cec", 00:10:40.942 
"assigned_rate_limits": { 00:10:40.942 "rw_ios_per_sec": 0, 00:10:40.942 "rw_mbytes_per_sec": 0, 00:10:40.942 "r_mbytes_per_sec": 0, 00:10:40.942 "w_mbytes_per_sec": 0 00:10:40.942 }, 00:10:40.942 "claimed": true, 00:10:40.942 "claim_type": "exclusive_write", 00:10:40.942 "zoned": false, 00:10:40.942 "supported_io_types": { 00:10:40.942 "read": true, 00:10:40.942 "write": true, 00:10:40.942 "unmap": true, 00:10:40.942 "flush": true, 00:10:40.942 "reset": true, 00:10:40.942 "nvme_admin": false, 00:10:40.942 "nvme_io": false, 00:10:40.942 "nvme_io_md": false, 00:10:40.942 "write_zeroes": true, 00:10:40.942 "zcopy": true, 00:10:40.942 "get_zone_info": false, 00:10:40.942 "zone_management": false, 00:10:40.942 "zone_append": false, 00:10:40.942 "compare": false, 00:10:40.942 "compare_and_write": false, 00:10:40.942 "abort": true, 00:10:40.942 "seek_hole": false, 00:10:40.942 "seek_data": false, 00:10:40.942 "copy": true, 00:10:40.942 "nvme_iov_md": false 00:10:40.942 }, 00:10:40.942 "memory_domains": [ 00:10:40.942 { 00:10:40.942 "dma_device_id": "system", 00:10:40.942 "dma_device_type": 1 00:10:40.942 }, 00:10:40.942 { 00:10:40.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.942 "dma_device_type": 2 00:10:40.942 } 00:10:40.942 ], 00:10:40.942 "driver_specific": {} 00:10:40.942 } 00:10:40.942 ] 00:10:40.942 10:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.942 10:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:40.942 10:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:40.942 10:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:40.942 10:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:40.942 10:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:40.942 10:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.942 10:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:40.942 10:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.942 10:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.942 10:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.942 10:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.942 10:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.942 10:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.942 10:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.942 10:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.942 10:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.942 10:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.942 10:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.942 10:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.942 "name": "Existed_Raid", 00:10:40.942 "uuid": "c98f0e05-ff30-4bb9-abb5-d9c62383de6b", 00:10:40.942 "strip_size_kb": 64, 00:10:40.942 "state": "configuring", 00:10:40.942 "raid_level": "raid0", 00:10:40.942 "superblock": true, 00:10:40.943 "num_base_bdevs": 4, 00:10:40.943 "num_base_bdevs_discovered": 2, 00:10:40.943 "num_base_bdevs_operational": 4, 
00:10:40.943 "base_bdevs_list": [ 00:10:40.943 { 00:10:40.943 "name": "BaseBdev1", 00:10:40.943 "uuid": "2625f2d6-abb0-437b-acb2-49961d339e74", 00:10:40.943 "is_configured": true, 00:10:40.943 "data_offset": 2048, 00:10:40.943 "data_size": 63488 00:10:40.943 }, 00:10:40.943 { 00:10:40.943 "name": "BaseBdev2", 00:10:40.943 "uuid": "414a0218-34a8-4b7c-817f-36400c309cec", 00:10:40.943 "is_configured": true, 00:10:40.943 "data_offset": 2048, 00:10:40.943 "data_size": 63488 00:10:40.943 }, 00:10:40.943 { 00:10:40.943 "name": "BaseBdev3", 00:10:40.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.943 "is_configured": false, 00:10:40.943 "data_offset": 0, 00:10:40.943 "data_size": 0 00:10:40.943 }, 00:10:40.943 { 00:10:40.943 "name": "BaseBdev4", 00:10:40.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.943 "is_configured": false, 00:10:40.943 "data_offset": 0, 00:10:40.943 "data_size": 0 00:10:40.943 } 00:10:40.943 ] 00:10:40.943 }' 00:10:40.943 10:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.943 10:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.509 10:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:41.509 10:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.509 10:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.509 [2024-11-15 10:39:02.480602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:41.509 BaseBdev3 00:10:41.509 10:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.509 10:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:41.509 10:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:10:41.509 10:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:41.509 10:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:41.509 10:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:41.509 10:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:41.509 10:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:41.509 10:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.509 10:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.509 10:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.510 10:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:41.510 10:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.510 10:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.510 [ 00:10:41.510 { 00:10:41.510 "name": "BaseBdev3", 00:10:41.510 "aliases": [ 00:10:41.510 "230502ec-b1a7-46c2-a90a-f58b12444d49" 00:10:41.510 ], 00:10:41.510 "product_name": "Malloc disk", 00:10:41.510 "block_size": 512, 00:10:41.510 "num_blocks": 65536, 00:10:41.510 "uuid": "230502ec-b1a7-46c2-a90a-f58b12444d49", 00:10:41.510 "assigned_rate_limits": { 00:10:41.510 "rw_ios_per_sec": 0, 00:10:41.510 "rw_mbytes_per_sec": 0, 00:10:41.510 "r_mbytes_per_sec": 0, 00:10:41.510 "w_mbytes_per_sec": 0 00:10:41.510 }, 00:10:41.510 "claimed": true, 00:10:41.510 "claim_type": "exclusive_write", 00:10:41.510 "zoned": false, 00:10:41.510 "supported_io_types": { 00:10:41.510 "read": true, 00:10:41.510 
"write": true, 00:10:41.510 "unmap": true, 00:10:41.510 "flush": true, 00:10:41.510 "reset": true, 00:10:41.510 "nvme_admin": false, 00:10:41.510 "nvme_io": false, 00:10:41.510 "nvme_io_md": false, 00:10:41.510 "write_zeroes": true, 00:10:41.510 "zcopy": true, 00:10:41.510 "get_zone_info": false, 00:10:41.510 "zone_management": false, 00:10:41.510 "zone_append": false, 00:10:41.510 "compare": false, 00:10:41.510 "compare_and_write": false, 00:10:41.510 "abort": true, 00:10:41.510 "seek_hole": false, 00:10:41.510 "seek_data": false, 00:10:41.510 "copy": true, 00:10:41.510 "nvme_iov_md": false 00:10:41.510 }, 00:10:41.510 "memory_domains": [ 00:10:41.510 { 00:10:41.510 "dma_device_id": "system", 00:10:41.510 "dma_device_type": 1 00:10:41.510 }, 00:10:41.510 { 00:10:41.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.510 "dma_device_type": 2 00:10:41.510 } 00:10:41.510 ], 00:10:41.510 "driver_specific": {} 00:10:41.510 } 00:10:41.510 ] 00:10:41.510 10:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.510 10:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:41.510 10:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:41.510 10:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:41.510 10:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:41.510 10:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.510 10:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.510 10:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:41.510 10:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:41.510 10:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.510 10:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.510 10:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.510 10:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.510 10:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.510 10:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.510 10:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.510 10:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.510 10:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.510 10:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.510 10:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.510 "name": "Existed_Raid", 00:10:41.510 "uuid": "c98f0e05-ff30-4bb9-abb5-d9c62383de6b", 00:10:41.510 "strip_size_kb": 64, 00:10:41.510 "state": "configuring", 00:10:41.510 "raid_level": "raid0", 00:10:41.510 "superblock": true, 00:10:41.510 "num_base_bdevs": 4, 00:10:41.510 "num_base_bdevs_discovered": 3, 00:10:41.510 "num_base_bdevs_operational": 4, 00:10:41.510 "base_bdevs_list": [ 00:10:41.510 { 00:10:41.510 "name": "BaseBdev1", 00:10:41.510 "uuid": "2625f2d6-abb0-437b-acb2-49961d339e74", 00:10:41.510 "is_configured": true, 00:10:41.510 "data_offset": 2048, 00:10:41.510 "data_size": 63488 00:10:41.510 }, 00:10:41.510 { 00:10:41.510 "name": "BaseBdev2", 00:10:41.510 "uuid": 
"414a0218-34a8-4b7c-817f-36400c309cec", 00:10:41.510 "is_configured": true, 00:10:41.510 "data_offset": 2048, 00:10:41.510 "data_size": 63488 00:10:41.510 }, 00:10:41.510 { 00:10:41.510 "name": "BaseBdev3", 00:10:41.510 "uuid": "230502ec-b1a7-46c2-a90a-f58b12444d49", 00:10:41.510 "is_configured": true, 00:10:41.510 "data_offset": 2048, 00:10:41.510 "data_size": 63488 00:10:41.510 }, 00:10:41.510 { 00:10:41.510 "name": "BaseBdev4", 00:10:41.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.510 "is_configured": false, 00:10:41.510 "data_offset": 0, 00:10:41.510 "data_size": 0 00:10:41.510 } 00:10:41.510 ] 00:10:41.510 }' 00:10:41.510 10:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.510 10:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.076 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:42.076 10:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.076 10:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.076 [2024-11-15 10:39:03.063422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:42.076 BaseBdev4 00:10:42.076 [2024-11-15 10:39:03.063943] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:42.076 [2024-11-15 10:39:03.063970] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:42.076 [2024-11-15 10:39:03.064312] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:42.076 [2024-11-15 10:39:03.064531] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:42.076 [2024-11-15 10:39:03.064558] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:10:42.076 [2024-11-15 10:39:03.064748] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:42.076 10:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.076 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:42.076 10:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:42.076 10:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:42.076 10:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:42.076 10:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:42.076 10:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:42.076 10:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:42.076 10:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.076 10:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.077 10:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.077 10:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:42.077 10:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.077 10:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.077 [ 00:10:42.077 { 00:10:42.077 "name": "BaseBdev4", 00:10:42.077 "aliases": [ 00:10:42.077 "e78c3495-1051-4c09-bd0c-0f6cfb623fea" 00:10:42.077 ], 00:10:42.077 "product_name": "Malloc disk", 00:10:42.077 "block_size": 512, 00:10:42.077 
"num_blocks": 65536, 00:10:42.077 "uuid": "e78c3495-1051-4c09-bd0c-0f6cfb623fea", 00:10:42.077 "assigned_rate_limits": { 00:10:42.077 "rw_ios_per_sec": 0, 00:10:42.077 "rw_mbytes_per_sec": 0, 00:10:42.077 "r_mbytes_per_sec": 0, 00:10:42.077 "w_mbytes_per_sec": 0 00:10:42.077 }, 00:10:42.077 "claimed": true, 00:10:42.077 "claim_type": "exclusive_write", 00:10:42.077 "zoned": false, 00:10:42.077 "supported_io_types": { 00:10:42.077 "read": true, 00:10:42.077 "write": true, 00:10:42.077 "unmap": true, 00:10:42.077 "flush": true, 00:10:42.077 "reset": true, 00:10:42.077 "nvme_admin": false, 00:10:42.077 "nvme_io": false, 00:10:42.077 "nvme_io_md": false, 00:10:42.077 "write_zeroes": true, 00:10:42.077 "zcopy": true, 00:10:42.077 "get_zone_info": false, 00:10:42.077 "zone_management": false, 00:10:42.077 "zone_append": false, 00:10:42.077 "compare": false, 00:10:42.077 "compare_and_write": false, 00:10:42.077 "abort": true, 00:10:42.077 "seek_hole": false, 00:10:42.077 "seek_data": false, 00:10:42.077 "copy": true, 00:10:42.077 "nvme_iov_md": false 00:10:42.077 }, 00:10:42.077 "memory_domains": [ 00:10:42.077 { 00:10:42.077 "dma_device_id": "system", 00:10:42.077 "dma_device_type": 1 00:10:42.077 }, 00:10:42.077 { 00:10:42.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.077 "dma_device_type": 2 00:10:42.077 } 00:10:42.077 ], 00:10:42.077 "driver_specific": {} 00:10:42.077 } 00:10:42.077 ] 00:10:42.077 10:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.077 10:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:42.077 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:42.077 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:42.077 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:10:42.077 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.077 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:42.077 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:42.077 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.077 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.077 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.077 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.077 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.077 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.077 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.077 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.077 10:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.077 10:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.077 10:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.077 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.077 "name": "Existed_Raid", 00:10:42.077 "uuid": "c98f0e05-ff30-4bb9-abb5-d9c62383de6b", 00:10:42.077 "strip_size_kb": 64, 00:10:42.077 "state": "online", 00:10:42.077 "raid_level": "raid0", 00:10:42.077 "superblock": true, 00:10:42.077 "num_base_bdevs": 4, 
00:10:42.077 "num_base_bdevs_discovered": 4, 00:10:42.077 "num_base_bdevs_operational": 4, 00:10:42.077 "base_bdevs_list": [ 00:10:42.077 { 00:10:42.077 "name": "BaseBdev1", 00:10:42.077 "uuid": "2625f2d6-abb0-437b-acb2-49961d339e74", 00:10:42.077 "is_configured": true, 00:10:42.077 "data_offset": 2048, 00:10:42.077 "data_size": 63488 00:10:42.077 }, 00:10:42.077 { 00:10:42.077 "name": "BaseBdev2", 00:10:42.077 "uuid": "414a0218-34a8-4b7c-817f-36400c309cec", 00:10:42.077 "is_configured": true, 00:10:42.077 "data_offset": 2048, 00:10:42.077 "data_size": 63488 00:10:42.077 }, 00:10:42.077 { 00:10:42.077 "name": "BaseBdev3", 00:10:42.077 "uuid": "230502ec-b1a7-46c2-a90a-f58b12444d49", 00:10:42.077 "is_configured": true, 00:10:42.077 "data_offset": 2048, 00:10:42.077 "data_size": 63488 00:10:42.077 }, 00:10:42.077 { 00:10:42.077 "name": "BaseBdev4", 00:10:42.077 "uuid": "e78c3495-1051-4c09-bd0c-0f6cfb623fea", 00:10:42.077 "is_configured": true, 00:10:42.077 "data_offset": 2048, 00:10:42.077 "data_size": 63488 00:10:42.077 } 00:10:42.077 ] 00:10:42.077 }' 00:10:42.077 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.077 10:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.644 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:42.644 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:42.644 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:42.644 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:42.644 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:42.644 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:42.644 
10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:42.644 10:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.644 10:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.644 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:42.644 [2024-11-15 10:39:03.576092] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:42.644 10:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.644 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:42.644 "name": "Existed_Raid", 00:10:42.644 "aliases": [ 00:10:42.644 "c98f0e05-ff30-4bb9-abb5-d9c62383de6b" 00:10:42.644 ], 00:10:42.644 "product_name": "Raid Volume", 00:10:42.644 "block_size": 512, 00:10:42.644 "num_blocks": 253952, 00:10:42.644 "uuid": "c98f0e05-ff30-4bb9-abb5-d9c62383de6b", 00:10:42.645 "assigned_rate_limits": { 00:10:42.645 "rw_ios_per_sec": 0, 00:10:42.645 "rw_mbytes_per_sec": 0, 00:10:42.645 "r_mbytes_per_sec": 0, 00:10:42.645 "w_mbytes_per_sec": 0 00:10:42.645 }, 00:10:42.645 "claimed": false, 00:10:42.645 "zoned": false, 00:10:42.645 "supported_io_types": { 00:10:42.645 "read": true, 00:10:42.645 "write": true, 00:10:42.645 "unmap": true, 00:10:42.645 "flush": true, 00:10:42.645 "reset": true, 00:10:42.645 "nvme_admin": false, 00:10:42.645 "nvme_io": false, 00:10:42.645 "nvme_io_md": false, 00:10:42.645 "write_zeroes": true, 00:10:42.645 "zcopy": false, 00:10:42.645 "get_zone_info": false, 00:10:42.645 "zone_management": false, 00:10:42.645 "zone_append": false, 00:10:42.645 "compare": false, 00:10:42.645 "compare_and_write": false, 00:10:42.645 "abort": false, 00:10:42.645 "seek_hole": false, 00:10:42.645 "seek_data": false, 00:10:42.645 "copy": false, 00:10:42.645 
"nvme_iov_md": false 00:10:42.645 }, 00:10:42.645 "memory_domains": [ 00:10:42.645 { 00:10:42.645 "dma_device_id": "system", 00:10:42.645 "dma_device_type": 1 00:10:42.645 }, 00:10:42.645 { 00:10:42.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.645 "dma_device_type": 2 00:10:42.645 }, 00:10:42.645 { 00:10:42.645 "dma_device_id": "system", 00:10:42.645 "dma_device_type": 1 00:10:42.645 }, 00:10:42.645 { 00:10:42.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.645 "dma_device_type": 2 00:10:42.645 }, 00:10:42.645 { 00:10:42.645 "dma_device_id": "system", 00:10:42.645 "dma_device_type": 1 00:10:42.645 }, 00:10:42.645 { 00:10:42.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.645 "dma_device_type": 2 00:10:42.645 }, 00:10:42.645 { 00:10:42.645 "dma_device_id": "system", 00:10:42.645 "dma_device_type": 1 00:10:42.645 }, 00:10:42.645 { 00:10:42.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.645 "dma_device_type": 2 00:10:42.645 } 00:10:42.645 ], 00:10:42.645 "driver_specific": { 00:10:42.645 "raid": { 00:10:42.645 "uuid": "c98f0e05-ff30-4bb9-abb5-d9c62383de6b", 00:10:42.645 "strip_size_kb": 64, 00:10:42.645 "state": "online", 00:10:42.645 "raid_level": "raid0", 00:10:42.645 "superblock": true, 00:10:42.645 "num_base_bdevs": 4, 00:10:42.645 "num_base_bdevs_discovered": 4, 00:10:42.645 "num_base_bdevs_operational": 4, 00:10:42.645 "base_bdevs_list": [ 00:10:42.645 { 00:10:42.645 "name": "BaseBdev1", 00:10:42.645 "uuid": "2625f2d6-abb0-437b-acb2-49961d339e74", 00:10:42.645 "is_configured": true, 00:10:42.645 "data_offset": 2048, 00:10:42.645 "data_size": 63488 00:10:42.645 }, 00:10:42.645 { 00:10:42.645 "name": "BaseBdev2", 00:10:42.645 "uuid": "414a0218-34a8-4b7c-817f-36400c309cec", 00:10:42.645 "is_configured": true, 00:10:42.645 "data_offset": 2048, 00:10:42.645 "data_size": 63488 00:10:42.645 }, 00:10:42.645 { 00:10:42.645 "name": "BaseBdev3", 00:10:42.645 "uuid": "230502ec-b1a7-46c2-a90a-f58b12444d49", 00:10:42.645 "is_configured": true, 
00:10:42.645 "data_offset": 2048, 00:10:42.645 "data_size": 63488 00:10:42.645 }, 00:10:42.645 { 00:10:42.645 "name": "BaseBdev4", 00:10:42.645 "uuid": "e78c3495-1051-4c09-bd0c-0f6cfb623fea", 00:10:42.645 "is_configured": true, 00:10:42.645 "data_offset": 2048, 00:10:42.645 "data_size": 63488 00:10:42.645 } 00:10:42.645 ] 00:10:42.645 } 00:10:42.645 } 00:10:42.645 }' 00:10:42.645 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:42.645 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:42.645 BaseBdev2 00:10:42.645 BaseBdev3 00:10:42.645 BaseBdev4' 00:10:42.645 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.645 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:42.645 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.645 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.645 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:42.645 10:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.645 10:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.645 10:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.645 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.645 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:42.645 10:39:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.645 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:42.645 10:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.645 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.645 10:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.645 10:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.904 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.904 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:42.904 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.904 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:42.904 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.904 10:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.904 10:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.904 10:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.904 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.904 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:42.904 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:42.904 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:42.904 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.904 10:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.904 10:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.905 10:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.905 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.905 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:42.905 10:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:42.905 10:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.905 10:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.905 [2024-11-15 10:39:03.939814] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:42.905 [2024-11-15 10:39:03.939986] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:42.905 [2024-11-15 10:39:03.940164] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:42.905 10:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.905 10:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:42.905 10:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:42.905 10:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:42.905 10:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:42.905 10:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:42.905 10:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:42.905 10:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.905 10:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:42.905 10:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:42.905 10:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.905 10:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:42.905 10:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.905 10:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.905 10:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.905 10:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.905 10:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.905 10:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.905 10:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.905 10:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.905 10:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:43.163 10:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.163 "name": "Existed_Raid", 00:10:43.163 "uuid": "c98f0e05-ff30-4bb9-abb5-d9c62383de6b", 00:10:43.163 "strip_size_kb": 64, 00:10:43.163 "state": "offline", 00:10:43.163 "raid_level": "raid0", 00:10:43.163 "superblock": true, 00:10:43.163 "num_base_bdevs": 4, 00:10:43.163 "num_base_bdevs_discovered": 3, 00:10:43.163 "num_base_bdevs_operational": 3, 00:10:43.163 "base_bdevs_list": [ 00:10:43.163 { 00:10:43.163 "name": null, 00:10:43.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.163 "is_configured": false, 00:10:43.163 "data_offset": 0, 00:10:43.163 "data_size": 63488 00:10:43.163 }, 00:10:43.163 { 00:10:43.163 "name": "BaseBdev2", 00:10:43.163 "uuid": "414a0218-34a8-4b7c-817f-36400c309cec", 00:10:43.163 "is_configured": true, 00:10:43.163 "data_offset": 2048, 00:10:43.163 "data_size": 63488 00:10:43.163 }, 00:10:43.163 { 00:10:43.163 "name": "BaseBdev3", 00:10:43.163 "uuid": "230502ec-b1a7-46c2-a90a-f58b12444d49", 00:10:43.163 "is_configured": true, 00:10:43.163 "data_offset": 2048, 00:10:43.163 "data_size": 63488 00:10:43.163 }, 00:10:43.163 { 00:10:43.163 "name": "BaseBdev4", 00:10:43.163 "uuid": "e78c3495-1051-4c09-bd0c-0f6cfb623fea", 00:10:43.163 "is_configured": true, 00:10:43.163 "data_offset": 2048, 00:10:43.163 "data_size": 63488 00:10:43.163 } 00:10:43.163 ] 00:10:43.163 }' 00:10:43.163 10:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.163 10:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.729 10:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:43.729 10:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:43.729 10:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:43.729 10:39:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.729 10:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.729 10:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.729 10:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.729 10:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:43.729 10:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:43.729 10:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:43.729 10:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.729 10:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.729 [2024-11-15 10:39:04.648930] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:43.729 10:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.729 10:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:43.729 10:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:43.729 10:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.729 10:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.729 10:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:43.729 10:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.729 10:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:43.729 10:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:43.729 10:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:43.729 10:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:43.729 10:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.729 10:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.729 [2024-11-15 10:39:04.795662] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:43.729 10:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.729 10:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:43.729 10:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:43.729 10:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.729 10:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:43.729 10:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.729 10:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.996 10:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.996 10:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:43.996 10:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:43.996 10:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:43.996 10:39:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.996 10:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.996 [2024-11-15 10:39:04.957707] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:43.996 [2024-11-15 10:39:04.957890] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:43.996 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.996 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:43.996 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:43.996 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.996 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:43.996 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.996 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.996 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.996 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:43.996 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:43.996 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:43.996 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:43.996 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:43.996 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:43.996 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.996 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.996 BaseBdev2 00:10:43.996 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.996 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:43.996 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:43.996 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:43.996 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:43.996 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:43.996 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:43.996 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:43.996 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.996 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.284 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.284 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:44.284 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.284 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.284 [ 00:10:44.284 { 00:10:44.284 "name": "BaseBdev2", 00:10:44.284 "aliases": [ 00:10:44.284 
"bf17ec8b-c648-4bd9-9fb1-ee3d12a9b8dc" 00:10:44.284 ], 00:10:44.284 "product_name": "Malloc disk", 00:10:44.284 "block_size": 512, 00:10:44.284 "num_blocks": 65536, 00:10:44.284 "uuid": "bf17ec8b-c648-4bd9-9fb1-ee3d12a9b8dc", 00:10:44.284 "assigned_rate_limits": { 00:10:44.284 "rw_ios_per_sec": 0, 00:10:44.284 "rw_mbytes_per_sec": 0, 00:10:44.284 "r_mbytes_per_sec": 0, 00:10:44.284 "w_mbytes_per_sec": 0 00:10:44.284 }, 00:10:44.284 "claimed": false, 00:10:44.284 "zoned": false, 00:10:44.284 "supported_io_types": { 00:10:44.284 "read": true, 00:10:44.284 "write": true, 00:10:44.284 "unmap": true, 00:10:44.284 "flush": true, 00:10:44.284 "reset": true, 00:10:44.284 "nvme_admin": false, 00:10:44.284 "nvme_io": false, 00:10:44.284 "nvme_io_md": false, 00:10:44.284 "write_zeroes": true, 00:10:44.284 "zcopy": true, 00:10:44.284 "get_zone_info": false, 00:10:44.284 "zone_management": false, 00:10:44.284 "zone_append": false, 00:10:44.284 "compare": false, 00:10:44.284 "compare_and_write": false, 00:10:44.284 "abort": true, 00:10:44.284 "seek_hole": false, 00:10:44.284 "seek_data": false, 00:10:44.284 "copy": true, 00:10:44.284 "nvme_iov_md": false 00:10:44.284 }, 00:10:44.284 "memory_domains": [ 00:10:44.284 { 00:10:44.284 "dma_device_id": "system", 00:10:44.284 "dma_device_type": 1 00:10:44.284 }, 00:10:44.284 { 00:10:44.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.284 "dma_device_type": 2 00:10:44.284 } 00:10:44.284 ], 00:10:44.284 "driver_specific": {} 00:10:44.284 } 00:10:44.284 ] 00:10:44.284 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.284 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:44.284 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:44.284 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:44.284 10:39:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:44.284 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.284 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.284 BaseBdev3 00:10:44.284 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.284 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:44.284 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:44.284 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:44.284 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:44.284 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:44.284 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:44.284 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:44.284 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.284 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.284 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.284 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:44.284 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.284 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.284 [ 00:10:44.284 { 
00:10:44.284 "name": "BaseBdev3", 00:10:44.284 "aliases": [ 00:10:44.284 "b1b7044b-4afc-47cd-b04b-f68f14cce9f3" 00:10:44.284 ], 00:10:44.284 "product_name": "Malloc disk", 00:10:44.284 "block_size": 512, 00:10:44.284 "num_blocks": 65536, 00:10:44.284 "uuid": "b1b7044b-4afc-47cd-b04b-f68f14cce9f3", 00:10:44.284 "assigned_rate_limits": { 00:10:44.284 "rw_ios_per_sec": 0, 00:10:44.284 "rw_mbytes_per_sec": 0, 00:10:44.284 "r_mbytes_per_sec": 0, 00:10:44.284 "w_mbytes_per_sec": 0 00:10:44.284 }, 00:10:44.284 "claimed": false, 00:10:44.284 "zoned": false, 00:10:44.284 "supported_io_types": { 00:10:44.284 "read": true, 00:10:44.284 "write": true, 00:10:44.284 "unmap": true, 00:10:44.284 "flush": true, 00:10:44.284 "reset": true, 00:10:44.284 "nvme_admin": false, 00:10:44.284 "nvme_io": false, 00:10:44.284 "nvme_io_md": false, 00:10:44.284 "write_zeroes": true, 00:10:44.284 "zcopy": true, 00:10:44.284 "get_zone_info": false, 00:10:44.284 "zone_management": false, 00:10:44.284 "zone_append": false, 00:10:44.284 "compare": false, 00:10:44.284 "compare_and_write": false, 00:10:44.284 "abort": true, 00:10:44.284 "seek_hole": false, 00:10:44.284 "seek_data": false, 00:10:44.284 "copy": true, 00:10:44.284 "nvme_iov_md": false 00:10:44.284 }, 00:10:44.284 "memory_domains": [ 00:10:44.284 { 00:10:44.284 "dma_device_id": "system", 00:10:44.284 "dma_device_type": 1 00:10:44.284 }, 00:10:44.284 { 00:10:44.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.284 "dma_device_type": 2 00:10:44.284 } 00:10:44.284 ], 00:10:44.284 "driver_specific": {} 00:10:44.284 } 00:10:44.284 ] 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.285 BaseBdev4 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:44.285 [ 00:10:44.285 { 00:10:44.285 "name": "BaseBdev4", 00:10:44.285 "aliases": [ 00:10:44.285 "87f83fb7-ddb1-4784-9960-4786c4ce794a" 00:10:44.285 ], 00:10:44.285 "product_name": "Malloc disk", 00:10:44.285 "block_size": 512, 00:10:44.285 "num_blocks": 65536, 00:10:44.285 "uuid": "87f83fb7-ddb1-4784-9960-4786c4ce794a", 00:10:44.285 "assigned_rate_limits": { 00:10:44.285 "rw_ios_per_sec": 0, 00:10:44.285 "rw_mbytes_per_sec": 0, 00:10:44.285 "r_mbytes_per_sec": 0, 00:10:44.285 "w_mbytes_per_sec": 0 00:10:44.285 }, 00:10:44.285 "claimed": false, 00:10:44.285 "zoned": false, 00:10:44.285 "supported_io_types": { 00:10:44.285 "read": true, 00:10:44.285 "write": true, 00:10:44.285 "unmap": true, 00:10:44.285 "flush": true, 00:10:44.285 "reset": true, 00:10:44.285 "nvme_admin": false, 00:10:44.285 "nvme_io": false, 00:10:44.285 "nvme_io_md": false, 00:10:44.285 "write_zeroes": true, 00:10:44.285 "zcopy": true, 00:10:44.285 "get_zone_info": false, 00:10:44.285 "zone_management": false, 00:10:44.285 "zone_append": false, 00:10:44.285 "compare": false, 00:10:44.285 "compare_and_write": false, 00:10:44.285 "abort": true, 00:10:44.285 "seek_hole": false, 00:10:44.285 "seek_data": false, 00:10:44.285 "copy": true, 00:10:44.285 "nvme_iov_md": false 00:10:44.285 }, 00:10:44.285 "memory_domains": [ 00:10:44.285 { 00:10:44.285 "dma_device_id": "system", 00:10:44.285 "dma_device_type": 1 00:10:44.285 }, 00:10:44.285 { 00:10:44.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.285 "dma_device_type": 2 00:10:44.285 } 00:10:44.285 ], 00:10:44.285 "driver_specific": {} 00:10:44.285 } 00:10:44.285 ] 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:44.285 10:39:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.285 [2024-11-15 10:39:05.321340] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:44.285 [2024-11-15 10:39:05.321525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:44.285 [2024-11-15 10:39:05.321663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:44.285 [2024-11-15 10:39:05.324163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:44.285 [2024-11-15 10:39:05.324358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.285 "name": "Existed_Raid", 00:10:44.285 "uuid": "f6f543a6-9e5c-492f-8bc9-97844465b205", 00:10:44.285 "strip_size_kb": 64, 00:10:44.285 "state": "configuring", 00:10:44.285 "raid_level": "raid0", 00:10:44.285 "superblock": true, 00:10:44.285 "num_base_bdevs": 4, 00:10:44.285 "num_base_bdevs_discovered": 3, 00:10:44.285 "num_base_bdevs_operational": 4, 00:10:44.285 "base_bdevs_list": [ 00:10:44.285 { 00:10:44.285 "name": "BaseBdev1", 00:10:44.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.285 "is_configured": false, 00:10:44.285 "data_offset": 0, 00:10:44.285 "data_size": 0 00:10:44.285 }, 00:10:44.285 { 00:10:44.285 "name": "BaseBdev2", 00:10:44.285 "uuid": "bf17ec8b-c648-4bd9-9fb1-ee3d12a9b8dc", 00:10:44.285 "is_configured": true, 00:10:44.285 "data_offset": 2048, 00:10:44.285 "data_size": 63488 
00:10:44.285 }, 00:10:44.285 { 00:10:44.285 "name": "BaseBdev3", 00:10:44.285 "uuid": "b1b7044b-4afc-47cd-b04b-f68f14cce9f3", 00:10:44.285 "is_configured": true, 00:10:44.285 "data_offset": 2048, 00:10:44.285 "data_size": 63488 00:10:44.285 }, 00:10:44.285 { 00:10:44.285 "name": "BaseBdev4", 00:10:44.285 "uuid": "87f83fb7-ddb1-4784-9960-4786c4ce794a", 00:10:44.285 "is_configured": true, 00:10:44.285 "data_offset": 2048, 00:10:44.285 "data_size": 63488 00:10:44.285 } 00:10:44.285 ] 00:10:44.285 }' 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.285 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.851 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:44.851 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.851 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.851 [2024-11-15 10:39:05.853473] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:44.852 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.852 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:44.852 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.852 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.852 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:44.852 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.852 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:44.852 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.852 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.852 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.852 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.852 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.852 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.852 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.852 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.852 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.852 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.852 "name": "Existed_Raid", 00:10:44.852 "uuid": "f6f543a6-9e5c-492f-8bc9-97844465b205", 00:10:44.852 "strip_size_kb": 64, 00:10:44.852 "state": "configuring", 00:10:44.852 "raid_level": "raid0", 00:10:44.852 "superblock": true, 00:10:44.852 "num_base_bdevs": 4, 00:10:44.852 "num_base_bdevs_discovered": 2, 00:10:44.852 "num_base_bdevs_operational": 4, 00:10:44.852 "base_bdevs_list": [ 00:10:44.852 { 00:10:44.852 "name": "BaseBdev1", 00:10:44.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.852 "is_configured": false, 00:10:44.852 "data_offset": 0, 00:10:44.852 "data_size": 0 00:10:44.852 }, 00:10:44.852 { 00:10:44.852 "name": null, 00:10:44.852 "uuid": "bf17ec8b-c648-4bd9-9fb1-ee3d12a9b8dc", 00:10:44.852 "is_configured": false, 00:10:44.852 "data_offset": 0, 00:10:44.852 "data_size": 63488 
00:10:44.852 }, 00:10:44.852 { 00:10:44.852 "name": "BaseBdev3", 00:10:44.852 "uuid": "b1b7044b-4afc-47cd-b04b-f68f14cce9f3", 00:10:44.852 "is_configured": true, 00:10:44.852 "data_offset": 2048, 00:10:44.852 "data_size": 63488 00:10:44.852 }, 00:10:44.852 { 00:10:44.852 "name": "BaseBdev4", 00:10:44.852 "uuid": "87f83fb7-ddb1-4784-9960-4786c4ce794a", 00:10:44.852 "is_configured": true, 00:10:44.852 "data_offset": 2048, 00:10:44.852 "data_size": 63488 00:10:44.852 } 00:10:44.852 ] 00:10:44.852 }' 00:10:44.852 10:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.852 10:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.418 10:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.418 10:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.418 10:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:45.418 10:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.418 10:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.418 10:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:45.418 10:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:45.418 10:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.418 10:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.418 [2024-11-15 10:39:06.451642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:45.418 BaseBdev1 00:10:45.418 10:39:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.418 10:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:45.418 10:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:45.418 10:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:45.418 10:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:45.418 10:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:45.418 10:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:45.418 10:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:45.418 10:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.418 10:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.418 10:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.418 10:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:45.418 10:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.418 10:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.418 [ 00:10:45.418 { 00:10:45.418 "name": "BaseBdev1", 00:10:45.418 "aliases": [ 00:10:45.418 "5276e10c-e190-40de-9c22-ed891c7bfe3c" 00:10:45.418 ], 00:10:45.418 "product_name": "Malloc disk", 00:10:45.418 "block_size": 512, 00:10:45.418 "num_blocks": 65536, 00:10:45.418 "uuid": "5276e10c-e190-40de-9c22-ed891c7bfe3c", 00:10:45.418 "assigned_rate_limits": { 00:10:45.418 "rw_ios_per_sec": 0, 00:10:45.418 "rw_mbytes_per_sec": 0, 
00:10:45.418 "r_mbytes_per_sec": 0, 00:10:45.418 "w_mbytes_per_sec": 0 00:10:45.418 }, 00:10:45.418 "claimed": true, 00:10:45.418 "claim_type": "exclusive_write", 00:10:45.418 "zoned": false, 00:10:45.418 "supported_io_types": { 00:10:45.418 "read": true, 00:10:45.418 "write": true, 00:10:45.418 "unmap": true, 00:10:45.418 "flush": true, 00:10:45.418 "reset": true, 00:10:45.418 "nvme_admin": false, 00:10:45.418 "nvme_io": false, 00:10:45.418 "nvme_io_md": false, 00:10:45.418 "write_zeroes": true, 00:10:45.418 "zcopy": true, 00:10:45.418 "get_zone_info": false, 00:10:45.418 "zone_management": false, 00:10:45.418 "zone_append": false, 00:10:45.418 "compare": false, 00:10:45.418 "compare_and_write": false, 00:10:45.418 "abort": true, 00:10:45.418 "seek_hole": false, 00:10:45.418 "seek_data": false, 00:10:45.418 "copy": true, 00:10:45.418 "nvme_iov_md": false 00:10:45.418 }, 00:10:45.418 "memory_domains": [ 00:10:45.418 { 00:10:45.418 "dma_device_id": "system", 00:10:45.418 "dma_device_type": 1 00:10:45.418 }, 00:10:45.418 { 00:10:45.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.418 "dma_device_type": 2 00:10:45.418 } 00:10:45.418 ], 00:10:45.418 "driver_specific": {} 00:10:45.418 } 00:10:45.418 ] 00:10:45.418 10:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.418 10:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:45.418 10:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:45.418 10:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.418 10:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.418 10:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.418 10:39:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.418 10:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.418 10:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.418 10:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.418 10:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.418 10:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.418 10:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.418 10:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.418 10:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.418 10:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.418 10:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.418 10:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.418 "name": "Existed_Raid", 00:10:45.418 "uuid": "f6f543a6-9e5c-492f-8bc9-97844465b205", 00:10:45.418 "strip_size_kb": 64, 00:10:45.418 "state": "configuring", 00:10:45.418 "raid_level": "raid0", 00:10:45.418 "superblock": true, 00:10:45.418 "num_base_bdevs": 4, 00:10:45.418 "num_base_bdevs_discovered": 3, 00:10:45.418 "num_base_bdevs_operational": 4, 00:10:45.418 "base_bdevs_list": [ 00:10:45.418 { 00:10:45.418 "name": "BaseBdev1", 00:10:45.418 "uuid": "5276e10c-e190-40de-9c22-ed891c7bfe3c", 00:10:45.418 "is_configured": true, 00:10:45.418 "data_offset": 2048, 00:10:45.418 "data_size": 63488 00:10:45.418 }, 00:10:45.418 { 
00:10:45.418 "name": null, 00:10:45.418 "uuid": "bf17ec8b-c648-4bd9-9fb1-ee3d12a9b8dc", 00:10:45.418 "is_configured": false, 00:10:45.418 "data_offset": 0, 00:10:45.418 "data_size": 63488 00:10:45.418 }, 00:10:45.418 { 00:10:45.418 "name": "BaseBdev3", 00:10:45.418 "uuid": "b1b7044b-4afc-47cd-b04b-f68f14cce9f3", 00:10:45.418 "is_configured": true, 00:10:45.418 "data_offset": 2048, 00:10:45.418 "data_size": 63488 00:10:45.418 }, 00:10:45.418 { 00:10:45.418 "name": "BaseBdev4", 00:10:45.418 "uuid": "87f83fb7-ddb1-4784-9960-4786c4ce794a", 00:10:45.418 "is_configured": true, 00:10:45.418 "data_offset": 2048, 00:10:45.418 "data_size": 63488 00:10:45.418 } 00:10:45.418 ] 00:10:45.418 }' 00:10:45.418 10:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.418 10:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.984 10:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.984 10:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.984 10:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.984 10:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:45.984 10:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.984 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:45.984 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:45.984 10:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.984 10:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.984 [2024-11-15 10:39:07.031837] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:45.984 10:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.984 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:45.984 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.984 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.984 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.984 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.984 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.984 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.984 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.984 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.984 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.984 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.984 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.984 10:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.984 10:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.984 10:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.984 10:39:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.984 "name": "Existed_Raid", 00:10:45.984 "uuid": "f6f543a6-9e5c-492f-8bc9-97844465b205", 00:10:45.984 "strip_size_kb": 64, 00:10:45.984 "state": "configuring", 00:10:45.984 "raid_level": "raid0", 00:10:45.984 "superblock": true, 00:10:45.984 "num_base_bdevs": 4, 00:10:45.984 "num_base_bdevs_discovered": 2, 00:10:45.984 "num_base_bdevs_operational": 4, 00:10:45.984 "base_bdevs_list": [ 00:10:45.984 { 00:10:45.984 "name": "BaseBdev1", 00:10:45.984 "uuid": "5276e10c-e190-40de-9c22-ed891c7bfe3c", 00:10:45.984 "is_configured": true, 00:10:45.984 "data_offset": 2048, 00:10:45.984 "data_size": 63488 00:10:45.984 }, 00:10:45.984 { 00:10:45.984 "name": null, 00:10:45.984 "uuid": "bf17ec8b-c648-4bd9-9fb1-ee3d12a9b8dc", 00:10:45.984 "is_configured": false, 00:10:45.984 "data_offset": 0, 00:10:45.984 "data_size": 63488 00:10:45.984 }, 00:10:45.984 { 00:10:45.984 "name": null, 00:10:45.984 "uuid": "b1b7044b-4afc-47cd-b04b-f68f14cce9f3", 00:10:45.984 "is_configured": false, 00:10:45.984 "data_offset": 0, 00:10:45.984 "data_size": 63488 00:10:45.984 }, 00:10:45.984 { 00:10:45.984 "name": "BaseBdev4", 00:10:45.984 "uuid": "87f83fb7-ddb1-4784-9960-4786c4ce794a", 00:10:45.984 "is_configured": true, 00:10:45.984 "data_offset": 2048, 00:10:45.984 "data_size": 63488 00:10:45.984 } 00:10:45.984 ] 00:10:45.984 }' 00:10:45.984 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.984 10:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.550 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.550 10:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.550 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:46.550 
10:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.550 10:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.550 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:46.550 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:46.550 10:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.550 10:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.550 [2024-11-15 10:39:07.591996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:46.550 10:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.550 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:46.550 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.550 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.550 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:46.550 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.550 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.550 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.550 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.550 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:46.550 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.550 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.550 10:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.550 10:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.550 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.550 10:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.550 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.550 "name": "Existed_Raid", 00:10:46.550 "uuid": "f6f543a6-9e5c-492f-8bc9-97844465b205", 00:10:46.550 "strip_size_kb": 64, 00:10:46.550 "state": "configuring", 00:10:46.550 "raid_level": "raid0", 00:10:46.550 "superblock": true, 00:10:46.550 "num_base_bdevs": 4, 00:10:46.550 "num_base_bdevs_discovered": 3, 00:10:46.550 "num_base_bdevs_operational": 4, 00:10:46.550 "base_bdevs_list": [ 00:10:46.550 { 00:10:46.550 "name": "BaseBdev1", 00:10:46.550 "uuid": "5276e10c-e190-40de-9c22-ed891c7bfe3c", 00:10:46.550 "is_configured": true, 00:10:46.550 "data_offset": 2048, 00:10:46.550 "data_size": 63488 00:10:46.550 }, 00:10:46.550 { 00:10:46.550 "name": null, 00:10:46.550 "uuid": "bf17ec8b-c648-4bd9-9fb1-ee3d12a9b8dc", 00:10:46.550 "is_configured": false, 00:10:46.550 "data_offset": 0, 00:10:46.550 "data_size": 63488 00:10:46.550 }, 00:10:46.550 { 00:10:46.550 "name": "BaseBdev3", 00:10:46.550 "uuid": "b1b7044b-4afc-47cd-b04b-f68f14cce9f3", 00:10:46.550 "is_configured": true, 00:10:46.550 "data_offset": 2048, 00:10:46.550 "data_size": 63488 00:10:46.550 }, 00:10:46.550 { 00:10:46.550 "name": "BaseBdev4", 00:10:46.550 "uuid": 
"87f83fb7-ddb1-4784-9960-4786c4ce794a", 00:10:46.550 "is_configured": true, 00:10:46.550 "data_offset": 2048, 00:10:46.550 "data_size": 63488 00:10:46.550 } 00:10:46.550 ] 00:10:46.550 }' 00:10:46.550 10:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.550 10:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.117 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:47.117 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.117 10:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.117 10:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.117 10:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.118 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:47.118 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:47.118 10:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.118 10:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.118 [2024-11-15 10:39:08.168246] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:47.118 10:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.118 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:47.118 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.118 10:39:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.118 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:47.118 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.118 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.118 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.118 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.118 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.118 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.118 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.118 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.118 10:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.118 10:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.377 10:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.377 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.377 "name": "Existed_Raid", 00:10:47.377 "uuid": "f6f543a6-9e5c-492f-8bc9-97844465b205", 00:10:47.377 "strip_size_kb": 64, 00:10:47.377 "state": "configuring", 00:10:47.377 "raid_level": "raid0", 00:10:47.377 "superblock": true, 00:10:47.377 "num_base_bdevs": 4, 00:10:47.377 "num_base_bdevs_discovered": 2, 00:10:47.377 "num_base_bdevs_operational": 4, 00:10:47.377 "base_bdevs_list": [ 00:10:47.377 { 00:10:47.377 "name": null, 00:10:47.377 
"uuid": "5276e10c-e190-40de-9c22-ed891c7bfe3c", 00:10:47.377 "is_configured": false, 00:10:47.377 "data_offset": 0, 00:10:47.377 "data_size": 63488 00:10:47.377 }, 00:10:47.377 { 00:10:47.377 "name": null, 00:10:47.377 "uuid": "bf17ec8b-c648-4bd9-9fb1-ee3d12a9b8dc", 00:10:47.377 "is_configured": false, 00:10:47.377 "data_offset": 0, 00:10:47.377 "data_size": 63488 00:10:47.377 }, 00:10:47.377 { 00:10:47.377 "name": "BaseBdev3", 00:10:47.377 "uuid": "b1b7044b-4afc-47cd-b04b-f68f14cce9f3", 00:10:47.377 "is_configured": true, 00:10:47.377 "data_offset": 2048, 00:10:47.377 "data_size": 63488 00:10:47.377 }, 00:10:47.377 { 00:10:47.377 "name": "BaseBdev4", 00:10:47.377 "uuid": "87f83fb7-ddb1-4784-9960-4786c4ce794a", 00:10:47.377 "is_configured": true, 00:10:47.377 "data_offset": 2048, 00:10:47.377 "data_size": 63488 00:10:47.377 } 00:10:47.377 ] 00:10:47.377 }' 00:10:47.377 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.377 10:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.943 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.943 10:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.943 10:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.943 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:47.943 10:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.943 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:47.943 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:47.943 10:39:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.943 10:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.943 [2024-11-15 10:39:08.882990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:47.943 10:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.943 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:47.943 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.943 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.943 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:47.943 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.943 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.943 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.943 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.943 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.943 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.943 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.943 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.943 10:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.943 10:39:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.943 10:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.943 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.943 "name": "Existed_Raid", 00:10:47.943 "uuid": "f6f543a6-9e5c-492f-8bc9-97844465b205", 00:10:47.943 "strip_size_kb": 64, 00:10:47.943 "state": "configuring", 00:10:47.943 "raid_level": "raid0", 00:10:47.943 "superblock": true, 00:10:47.943 "num_base_bdevs": 4, 00:10:47.943 "num_base_bdevs_discovered": 3, 00:10:47.943 "num_base_bdevs_operational": 4, 00:10:47.943 "base_bdevs_list": [ 00:10:47.943 { 00:10:47.943 "name": null, 00:10:47.943 "uuid": "5276e10c-e190-40de-9c22-ed891c7bfe3c", 00:10:47.943 "is_configured": false, 00:10:47.943 "data_offset": 0, 00:10:47.943 "data_size": 63488 00:10:47.943 }, 00:10:47.943 { 00:10:47.943 "name": "BaseBdev2", 00:10:47.943 "uuid": "bf17ec8b-c648-4bd9-9fb1-ee3d12a9b8dc", 00:10:47.943 "is_configured": true, 00:10:47.943 "data_offset": 2048, 00:10:47.943 "data_size": 63488 00:10:47.943 }, 00:10:47.943 { 00:10:47.943 "name": "BaseBdev3", 00:10:47.943 "uuid": "b1b7044b-4afc-47cd-b04b-f68f14cce9f3", 00:10:47.943 "is_configured": true, 00:10:47.943 "data_offset": 2048, 00:10:47.943 "data_size": 63488 00:10:47.943 }, 00:10:47.943 { 00:10:47.943 "name": "BaseBdev4", 00:10:47.943 "uuid": "87f83fb7-ddb1-4784-9960-4786c4ce794a", 00:10:47.943 "is_configured": true, 00:10:47.943 "data_offset": 2048, 00:10:47.943 "data_size": 63488 00:10:47.943 } 00:10:47.943 ] 00:10:47.943 }' 00:10:47.943 10:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.943 10:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.528 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.528 10:39:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.528 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.528 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:48.528 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.528 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:48.528 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.528 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:48.528 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.528 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.528 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.528 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5276e10c-e190-40de-9c22-ed891c7bfe3c 00:10:48.528 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.528 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.528 NewBaseBdev 00:10:48.528 [2024-11-15 10:39:09.549600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:48.528 [2024-11-15 10:39:09.549919] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:48.528 [2024-11-15 10:39:09.549953] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:48.528 [2024-11-15 10:39:09.550273] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:48.528 [2024-11-15 10:39:09.550452] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:48.528 [2024-11-15 10:39:09.550474] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:48.528 [2024-11-15 10:39:09.550673] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:48.528 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.528 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:48.528 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:48.528 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:48.528 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:48.528 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:48.528 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:48.528 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:48.528 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.529 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.529 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.529 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:48.529 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.529 
10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.529 [ 00:10:48.529 { 00:10:48.529 "name": "NewBaseBdev", 00:10:48.529 "aliases": [ 00:10:48.529 "5276e10c-e190-40de-9c22-ed891c7bfe3c" 00:10:48.529 ], 00:10:48.529 "product_name": "Malloc disk", 00:10:48.529 "block_size": 512, 00:10:48.529 "num_blocks": 65536, 00:10:48.529 "uuid": "5276e10c-e190-40de-9c22-ed891c7bfe3c", 00:10:48.529 "assigned_rate_limits": { 00:10:48.529 "rw_ios_per_sec": 0, 00:10:48.529 "rw_mbytes_per_sec": 0, 00:10:48.529 "r_mbytes_per_sec": 0, 00:10:48.529 "w_mbytes_per_sec": 0 00:10:48.529 }, 00:10:48.529 "claimed": true, 00:10:48.529 "claim_type": "exclusive_write", 00:10:48.529 "zoned": false, 00:10:48.529 "supported_io_types": { 00:10:48.529 "read": true, 00:10:48.529 "write": true, 00:10:48.529 "unmap": true, 00:10:48.529 "flush": true, 00:10:48.529 "reset": true, 00:10:48.529 "nvme_admin": false, 00:10:48.529 "nvme_io": false, 00:10:48.529 "nvme_io_md": false, 00:10:48.529 "write_zeroes": true, 00:10:48.529 "zcopy": true, 00:10:48.529 "get_zone_info": false, 00:10:48.529 "zone_management": false, 00:10:48.529 "zone_append": false, 00:10:48.529 "compare": false, 00:10:48.529 "compare_and_write": false, 00:10:48.529 "abort": true, 00:10:48.529 "seek_hole": false, 00:10:48.529 "seek_data": false, 00:10:48.529 "copy": true, 00:10:48.529 "nvme_iov_md": false 00:10:48.529 }, 00:10:48.529 "memory_domains": [ 00:10:48.529 { 00:10:48.529 "dma_device_id": "system", 00:10:48.529 "dma_device_type": 1 00:10:48.529 }, 00:10:48.529 { 00:10:48.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.529 "dma_device_type": 2 00:10:48.529 } 00:10:48.529 ], 00:10:48.529 "driver_specific": {} 00:10:48.529 } 00:10:48.529 ] 00:10:48.529 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.529 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:48.529 10:39:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:48.529 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.529 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:48.529 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:48.529 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.529 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.529 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.529 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.529 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.529 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.529 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.529 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.529 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.529 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.529 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.529 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.529 "name": "Existed_Raid", 00:10:48.529 "uuid": "f6f543a6-9e5c-492f-8bc9-97844465b205", 00:10:48.529 "strip_size_kb": 64, 00:10:48.529 
"state": "online", 00:10:48.529 "raid_level": "raid0", 00:10:48.529 "superblock": true, 00:10:48.529 "num_base_bdevs": 4, 00:10:48.529 "num_base_bdevs_discovered": 4, 00:10:48.529 "num_base_bdevs_operational": 4, 00:10:48.529 "base_bdevs_list": [ 00:10:48.529 { 00:10:48.529 "name": "NewBaseBdev", 00:10:48.529 "uuid": "5276e10c-e190-40de-9c22-ed891c7bfe3c", 00:10:48.529 "is_configured": true, 00:10:48.529 "data_offset": 2048, 00:10:48.529 "data_size": 63488 00:10:48.529 }, 00:10:48.529 { 00:10:48.529 "name": "BaseBdev2", 00:10:48.529 "uuid": "bf17ec8b-c648-4bd9-9fb1-ee3d12a9b8dc", 00:10:48.529 "is_configured": true, 00:10:48.529 "data_offset": 2048, 00:10:48.529 "data_size": 63488 00:10:48.529 }, 00:10:48.529 { 00:10:48.529 "name": "BaseBdev3", 00:10:48.529 "uuid": "b1b7044b-4afc-47cd-b04b-f68f14cce9f3", 00:10:48.529 "is_configured": true, 00:10:48.529 "data_offset": 2048, 00:10:48.529 "data_size": 63488 00:10:48.529 }, 00:10:48.529 { 00:10:48.529 "name": "BaseBdev4", 00:10:48.529 "uuid": "87f83fb7-ddb1-4784-9960-4786c4ce794a", 00:10:48.529 "is_configured": true, 00:10:48.529 "data_offset": 2048, 00:10:48.529 "data_size": 63488 00:10:48.529 } 00:10:48.529 ] 00:10:48.529 }' 00:10:48.529 10:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.529 10:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.127 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:49.127 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:49.127 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:49.127 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:49.127 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:49.127 
10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:49.127 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:49.128 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:49.128 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.128 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.128 [2024-11-15 10:39:10.078227] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:49.128 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.128 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:49.128 "name": "Existed_Raid", 00:10:49.128 "aliases": [ 00:10:49.128 "f6f543a6-9e5c-492f-8bc9-97844465b205" 00:10:49.128 ], 00:10:49.128 "product_name": "Raid Volume", 00:10:49.128 "block_size": 512, 00:10:49.128 "num_blocks": 253952, 00:10:49.128 "uuid": "f6f543a6-9e5c-492f-8bc9-97844465b205", 00:10:49.128 "assigned_rate_limits": { 00:10:49.128 "rw_ios_per_sec": 0, 00:10:49.128 "rw_mbytes_per_sec": 0, 00:10:49.128 "r_mbytes_per_sec": 0, 00:10:49.128 "w_mbytes_per_sec": 0 00:10:49.128 }, 00:10:49.128 "claimed": false, 00:10:49.128 "zoned": false, 00:10:49.128 "supported_io_types": { 00:10:49.128 "read": true, 00:10:49.128 "write": true, 00:10:49.128 "unmap": true, 00:10:49.128 "flush": true, 00:10:49.128 "reset": true, 00:10:49.128 "nvme_admin": false, 00:10:49.128 "nvme_io": false, 00:10:49.128 "nvme_io_md": false, 00:10:49.128 "write_zeroes": true, 00:10:49.128 "zcopy": false, 00:10:49.128 "get_zone_info": false, 00:10:49.128 "zone_management": false, 00:10:49.128 "zone_append": false, 00:10:49.128 "compare": false, 00:10:49.128 "compare_and_write": false, 00:10:49.128 "abort": 
false, 00:10:49.128 "seek_hole": false, 00:10:49.128 "seek_data": false, 00:10:49.128 "copy": false, 00:10:49.128 "nvme_iov_md": false 00:10:49.128 }, 00:10:49.128 "memory_domains": [ 00:10:49.128 { 00:10:49.128 "dma_device_id": "system", 00:10:49.128 "dma_device_type": 1 00:10:49.128 }, 00:10:49.128 { 00:10:49.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.128 "dma_device_type": 2 00:10:49.128 }, 00:10:49.128 { 00:10:49.128 "dma_device_id": "system", 00:10:49.128 "dma_device_type": 1 00:10:49.128 }, 00:10:49.128 { 00:10:49.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.128 "dma_device_type": 2 00:10:49.128 }, 00:10:49.128 { 00:10:49.128 "dma_device_id": "system", 00:10:49.128 "dma_device_type": 1 00:10:49.128 }, 00:10:49.128 { 00:10:49.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.128 "dma_device_type": 2 00:10:49.128 }, 00:10:49.128 { 00:10:49.128 "dma_device_id": "system", 00:10:49.128 "dma_device_type": 1 00:10:49.128 }, 00:10:49.128 { 00:10:49.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.128 "dma_device_type": 2 00:10:49.128 } 00:10:49.128 ], 00:10:49.128 "driver_specific": { 00:10:49.128 "raid": { 00:10:49.128 "uuid": "f6f543a6-9e5c-492f-8bc9-97844465b205", 00:10:49.128 "strip_size_kb": 64, 00:10:49.128 "state": "online", 00:10:49.128 "raid_level": "raid0", 00:10:49.128 "superblock": true, 00:10:49.128 "num_base_bdevs": 4, 00:10:49.128 "num_base_bdevs_discovered": 4, 00:10:49.128 "num_base_bdevs_operational": 4, 00:10:49.128 "base_bdevs_list": [ 00:10:49.128 { 00:10:49.128 "name": "NewBaseBdev", 00:10:49.128 "uuid": "5276e10c-e190-40de-9c22-ed891c7bfe3c", 00:10:49.128 "is_configured": true, 00:10:49.128 "data_offset": 2048, 00:10:49.128 "data_size": 63488 00:10:49.128 }, 00:10:49.128 { 00:10:49.128 "name": "BaseBdev2", 00:10:49.128 "uuid": "bf17ec8b-c648-4bd9-9fb1-ee3d12a9b8dc", 00:10:49.128 "is_configured": true, 00:10:49.128 "data_offset": 2048, 00:10:49.128 "data_size": 63488 00:10:49.128 }, 00:10:49.128 { 00:10:49.128 
"name": "BaseBdev3", 00:10:49.128 "uuid": "b1b7044b-4afc-47cd-b04b-f68f14cce9f3", 00:10:49.128 "is_configured": true, 00:10:49.128 "data_offset": 2048, 00:10:49.128 "data_size": 63488 00:10:49.128 }, 00:10:49.128 { 00:10:49.128 "name": "BaseBdev4", 00:10:49.128 "uuid": "87f83fb7-ddb1-4784-9960-4786c4ce794a", 00:10:49.128 "is_configured": true, 00:10:49.128 "data_offset": 2048, 00:10:49.128 "data_size": 63488 00:10:49.128 } 00:10:49.128 ] 00:10:49.128 } 00:10:49.128 } 00:10:49.128 }' 00:10:49.128 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:49.128 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:49.128 BaseBdev2 00:10:49.128 BaseBdev3 00:10:49.128 BaseBdev4' 00:10:49.128 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.128 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:49.128 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.128 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:49.128 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.128 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.128 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.128 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.128 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.128 10:39:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.128 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.128 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:49.128 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.128 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.128 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.387 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.387 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.387 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.387 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.387 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:49.387 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.387 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.387 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.387 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.387 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.387 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:49.387 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.387 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:49.387 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.387 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.387 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.387 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.387 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.387 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.387 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:49.387 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.387 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.387 [2024-11-15 10:39:10.433842] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:49.387 [2024-11-15 10:39:10.434003] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:49.387 [2024-11-15 10:39:10.434116] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:49.387 [2024-11-15 10:39:10.434207] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:49.387 [2024-11-15 10:39:10.434223] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:10:49.387 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.387 10:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70123 00:10:49.387 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70123 ']' 00:10:49.387 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70123 00:10:49.387 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:49.387 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:49.387 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70123 00:10:49.387 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:49.387 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:49.387 killing process with pid 70123 00:10:49.387 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70123' 00:10:49.387 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70123 00:10:49.387 [2024-11-15 10:39:10.472271] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:49.387 10:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70123 00:10:49.954 [2024-11-15 10:39:10.826361] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:50.887 ************************************ 00:10:50.887 END TEST raid_state_function_test_sb 00:10:50.887 ************************************ 00:10:50.887 10:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:50.887 00:10:50.887 real 0m12.816s 00:10:50.887 user 0m21.275s 00:10:50.887 sys 
0m1.787s 00:10:50.887 10:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:50.887 10:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.887 10:39:11 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:10:50.887 10:39:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:50.887 10:39:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:50.887 10:39:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:50.887 ************************************ 00:10:50.887 START TEST raid_superblock_test 00:10:50.887 ************************************ 00:10:50.887 10:39:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:10:50.887 10:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:50.887 10:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:50.887 10:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:50.887 10:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:50.887 10:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:50.888 10:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:50.888 10:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:50.888 10:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:50.888 10:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:50.888 10:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:50.888 10:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # 
local strip_size_create_arg 00:10:50.888 10:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:50.888 10:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:50.888 10:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:50.888 10:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:50.888 10:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:50.888 10:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70807 00:10:50.888 10:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:50.888 10:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70807 00:10:50.888 10:39:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70807 ']' 00:10:50.888 10:39:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.888 10:39:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:50.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.888 10:39:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.888 10:39:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:50.888 10:39:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.888 [2024-11-15 10:39:12.028484] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:10:50.888 [2024-11-15 10:39:12.029224] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70807 ] 00:10:51.147 [2024-11-15 10:39:12.213800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.405 [2024-11-15 10:39:12.342650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.405 [2024-11-15 10:39:12.546953] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:51.405 [2024-11-15 10:39:12.547035] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:51.973 10:39:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:51.973 10:39:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:51.973 10:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:51.973 10:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:51.973 10:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:51.973 10:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:51.973 10:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:51.973 10:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:51.973 10:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:51.973 10:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:51.973 10:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:51.973 
10:39:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.973 10:39:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.973 malloc1 00:10:51.973 10:39:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.973 10:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:51.973 10:39:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.973 10:39:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.973 [2024-11-15 10:39:13.006262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:51.973 [2024-11-15 10:39:13.006485] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.973 [2024-11-15 10:39:13.006678] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:51.973 [2024-11-15 10:39:13.006842] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.973 [2024-11-15 10:39:13.009905] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.973 [2024-11-15 10:39:13.010078] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:51.973 pt1 00:10:51.973 10:39:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.973 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:51.973 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:51.973 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:51.973 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:51.973 10:39:13 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:51.973 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:51.973 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:51.973 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:51.973 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:51.973 10:39:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.973 10:39:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.973 malloc2 00:10:51.973 10:39:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.973 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:51.973 10:39:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.973 10:39:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.973 [2024-11-15 10:39:13.063171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:51.973 [2024-11-15 10:39:13.063256] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.973 [2024-11-15 10:39:13.063292] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:51.973 [2024-11-15 10:39:13.063307] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.973 [2024-11-15 10:39:13.066423] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.973 [2024-11-15 10:39:13.066473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:51.973 
pt2 00:10:51.973 10:39:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.973 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:51.973 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:51.973 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:51.973 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:51.973 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:51.973 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:51.973 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:51.973 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:51.973 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:51.973 10:39:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.973 10:39:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.973 malloc3 00:10:51.973 10:39:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.973 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:51.973 10:39:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.973 10:39:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.973 [2024-11-15 10:39:13.127642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:51.973 [2024-11-15 10:39:13.127716] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.973 [2024-11-15 10:39:13.127753] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:51.973 [2024-11-15 10:39:13.127770] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.973 [2024-11-15 10:39:13.130710] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.973 [2024-11-15 10:39:13.130759] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:52.232 pt3 00:10:52.232 10:39:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.232 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:52.232 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:52.232 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:52.232 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:52.233 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:52.233 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:52.233 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:52.233 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:52.233 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:52.233 10:39:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.233 10:39:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.233 malloc4 00:10:52.233 10:39:13 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.233 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:52.233 10:39:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.233 10:39:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.233 [2024-11-15 10:39:13.185604] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:52.233 [2024-11-15 10:39:13.185797] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.233 [2024-11-15 10:39:13.185841] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:52.233 [2024-11-15 10:39:13.185858] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.233 [2024-11-15 10:39:13.188596] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.233 [2024-11-15 10:39:13.188642] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:52.233 pt4 00:10:52.233 10:39:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.233 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:52.233 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:52.233 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:52.233 10:39:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.233 10:39:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.233 [2024-11-15 10:39:13.197648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:52.233 [2024-11-15 
10:39:13.200048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:52.233 [2024-11-15 10:39:13.200274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:52.233 [2024-11-15 10:39:13.200399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:52.233 [2024-11-15 10:39:13.200691] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:52.233 [2024-11-15 10:39:13.200710] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:52.233 [2024-11-15 10:39:13.201097] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:52.233 [2024-11-15 10:39:13.201346] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:52.233 [2024-11-15 10:39:13.201369] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:52.233 [2024-11-15 10:39:13.201646] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:52.233 10:39:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.233 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:52.233 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:52.233 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:52.233 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:52.233 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.233 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.233 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:52.233 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.233 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.233 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.233 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:52.233 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.233 10:39:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.233 10:39:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.233 10:39:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.233 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.233 "name": "raid_bdev1", 00:10:52.233 "uuid": "365a11f6-accc-4b74-ab51-ece25ee50dd6", 00:10:52.233 "strip_size_kb": 64, 00:10:52.233 "state": "online", 00:10:52.233 "raid_level": "raid0", 00:10:52.233 "superblock": true, 00:10:52.233 "num_base_bdevs": 4, 00:10:52.233 "num_base_bdevs_discovered": 4, 00:10:52.233 "num_base_bdevs_operational": 4, 00:10:52.233 "base_bdevs_list": [ 00:10:52.233 { 00:10:52.233 "name": "pt1", 00:10:52.233 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:52.233 "is_configured": true, 00:10:52.233 "data_offset": 2048, 00:10:52.233 "data_size": 63488 00:10:52.233 }, 00:10:52.233 { 00:10:52.233 "name": "pt2", 00:10:52.233 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:52.233 "is_configured": true, 00:10:52.233 "data_offset": 2048, 00:10:52.233 "data_size": 63488 00:10:52.233 }, 00:10:52.233 { 00:10:52.233 "name": "pt3", 00:10:52.233 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:52.233 "is_configured": true, 00:10:52.233 "data_offset": 2048, 00:10:52.233 
"data_size": 63488 00:10:52.233 }, 00:10:52.233 { 00:10:52.233 "name": "pt4", 00:10:52.233 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:52.233 "is_configured": true, 00:10:52.233 "data_offset": 2048, 00:10:52.233 "data_size": 63488 00:10:52.233 } 00:10:52.233 ] 00:10:52.233 }' 00:10:52.233 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.233 10:39:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.844 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:52.844 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:52.844 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:52.844 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:52.844 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:52.844 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:52.844 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:52.844 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:52.844 10:39:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.844 10:39:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.844 [2024-11-15 10:39:13.730274] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:52.844 10:39:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.844 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:52.844 "name": "raid_bdev1", 00:10:52.844 "aliases": [ 00:10:52.844 "365a11f6-accc-4b74-ab51-ece25ee50dd6" 
00:10:52.844 ], 00:10:52.844 "product_name": "Raid Volume", 00:10:52.844 "block_size": 512, 00:10:52.844 "num_blocks": 253952, 00:10:52.844 "uuid": "365a11f6-accc-4b74-ab51-ece25ee50dd6", 00:10:52.844 "assigned_rate_limits": { 00:10:52.844 "rw_ios_per_sec": 0, 00:10:52.844 "rw_mbytes_per_sec": 0, 00:10:52.844 "r_mbytes_per_sec": 0, 00:10:52.844 "w_mbytes_per_sec": 0 00:10:52.844 }, 00:10:52.844 "claimed": false, 00:10:52.844 "zoned": false, 00:10:52.844 "supported_io_types": { 00:10:52.844 "read": true, 00:10:52.844 "write": true, 00:10:52.844 "unmap": true, 00:10:52.844 "flush": true, 00:10:52.844 "reset": true, 00:10:52.844 "nvme_admin": false, 00:10:52.844 "nvme_io": false, 00:10:52.844 "nvme_io_md": false, 00:10:52.844 "write_zeroes": true, 00:10:52.844 "zcopy": false, 00:10:52.844 "get_zone_info": false, 00:10:52.844 "zone_management": false, 00:10:52.844 "zone_append": false, 00:10:52.844 "compare": false, 00:10:52.844 "compare_and_write": false, 00:10:52.844 "abort": false, 00:10:52.844 "seek_hole": false, 00:10:52.844 "seek_data": false, 00:10:52.844 "copy": false, 00:10:52.844 "nvme_iov_md": false 00:10:52.844 }, 00:10:52.844 "memory_domains": [ 00:10:52.844 { 00:10:52.844 "dma_device_id": "system", 00:10:52.844 "dma_device_type": 1 00:10:52.844 }, 00:10:52.844 { 00:10:52.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.844 "dma_device_type": 2 00:10:52.844 }, 00:10:52.844 { 00:10:52.844 "dma_device_id": "system", 00:10:52.844 "dma_device_type": 1 00:10:52.844 }, 00:10:52.844 { 00:10:52.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.844 "dma_device_type": 2 00:10:52.844 }, 00:10:52.844 { 00:10:52.844 "dma_device_id": "system", 00:10:52.844 "dma_device_type": 1 00:10:52.844 }, 00:10:52.844 { 00:10:52.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.844 "dma_device_type": 2 00:10:52.844 }, 00:10:52.844 { 00:10:52.844 "dma_device_id": "system", 00:10:52.844 "dma_device_type": 1 00:10:52.844 }, 00:10:52.844 { 00:10:52.844 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:52.844 "dma_device_type": 2 00:10:52.844 } 00:10:52.844 ], 00:10:52.844 "driver_specific": { 00:10:52.844 "raid": { 00:10:52.844 "uuid": "365a11f6-accc-4b74-ab51-ece25ee50dd6", 00:10:52.844 "strip_size_kb": 64, 00:10:52.844 "state": "online", 00:10:52.844 "raid_level": "raid0", 00:10:52.844 "superblock": true, 00:10:52.844 "num_base_bdevs": 4, 00:10:52.844 "num_base_bdevs_discovered": 4, 00:10:52.844 "num_base_bdevs_operational": 4, 00:10:52.844 "base_bdevs_list": [ 00:10:52.844 { 00:10:52.844 "name": "pt1", 00:10:52.844 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:52.844 "is_configured": true, 00:10:52.844 "data_offset": 2048, 00:10:52.844 "data_size": 63488 00:10:52.844 }, 00:10:52.844 { 00:10:52.844 "name": "pt2", 00:10:52.844 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:52.844 "is_configured": true, 00:10:52.844 "data_offset": 2048, 00:10:52.844 "data_size": 63488 00:10:52.844 }, 00:10:52.844 { 00:10:52.844 "name": "pt3", 00:10:52.844 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:52.844 "is_configured": true, 00:10:52.844 "data_offset": 2048, 00:10:52.844 "data_size": 63488 00:10:52.844 }, 00:10:52.844 { 00:10:52.844 "name": "pt4", 00:10:52.844 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:52.844 "is_configured": true, 00:10:52.844 "data_offset": 2048, 00:10:52.844 "data_size": 63488 00:10:52.844 } 00:10:52.844 ] 00:10:52.844 } 00:10:52.844 } 00:10:52.844 }' 00:10:52.844 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:52.844 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:52.844 pt2 00:10:52.844 pt3 00:10:52.844 pt4' 00:10:52.844 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.844 10:39:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:52.844 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.844 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:52.844 10:39:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.844 10:39:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.844 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.844 10:39:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.844 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.844 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.844 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.844 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.844 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:52.844 10:39:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.844 10:39:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.844 10:39:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.844 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.844 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.844 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.844 10:39:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:52.844 10:39:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.845 10:39:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.845 10:39:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.103 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.103 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.103 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.103 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.103 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.103 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:53.103 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.103 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.103 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.103 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.103 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.104 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:53.104 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:53.104 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:53.104 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.104 [2024-11-15 10:39:14.106352] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:53.104 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.104 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=365a11f6-accc-4b74-ab51-ece25ee50dd6 00:10:53.104 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 365a11f6-accc-4b74-ab51-ece25ee50dd6 ']' 00:10:53.104 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:53.104 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.104 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.104 [2024-11-15 10:39:14.157982] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:53.104 [2024-11-15 10:39:14.158015] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:53.104 [2024-11-15 10:39:14.158124] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:53.104 [2024-11-15 10:39:14.158216] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:53.104 [2024-11-15 10:39:14.158240] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:53.104 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.104 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:53.104 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.104 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:53.104 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.104 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.104 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:53.104 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:53.104 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:53.104 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:53.104 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.104 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.104 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.104 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:53.104 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:53.104 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.104 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.104 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.104 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:53.104 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:53.104 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.104 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.104 10:39:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.104 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:53.104 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:53.104 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.104 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.363 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.363 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:53.363 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:53.364 10:39:14 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.364 [2024-11-15 10:39:14.326067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:53.364 [2024-11-15 10:39:14.328749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:53.364 [2024-11-15 10:39:14.328967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:53.364 [2024-11-15 10:39:14.329162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:53.364 [2024-11-15 10:39:14.329349] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:53.364 [2024-11-15 10:39:14.329567] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:53.364 [2024-11-15 10:39:14.329736] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:53.364 [2024-11-15 10:39:14.329978] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:53.364 [2024-11-15 10:39:14.330188] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:53.364 [2024-11-15 10:39:14.330347] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:10:53.364 request: 00:10:53.364 { 00:10:53.364 "name": "raid_bdev1", 00:10:53.364 "raid_level": "raid0", 00:10:53.364 "base_bdevs": [ 00:10:53.364 "malloc1", 00:10:53.364 "malloc2", 00:10:53.364 "malloc3", 00:10:53.364 "malloc4" 00:10:53.364 ], 00:10:53.364 "strip_size_kb": 64, 00:10:53.364 "superblock": false, 00:10:53.364 "method": "bdev_raid_create", 00:10:53.364 "req_id": 1 00:10:53.364 } 00:10:53.364 Got JSON-RPC error response 00:10:53.364 response: 00:10:53.364 { 00:10:53.364 "code": -17, 00:10:53.364 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:53.364 } 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.364 [2024-11-15 10:39:14.390779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:53.364 [2024-11-15 10:39:14.390986] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.364 [2024-11-15 10:39:14.391153] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:53.364 [2024-11-15 10:39:14.391295] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.364 [2024-11-15 10:39:14.394329] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.364 [2024-11-15 10:39:14.394507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:53.364 [2024-11-15 10:39:14.394727] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:53.364 [2024-11-15 10:39:14.394934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:53.364 pt1 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.364 "name": "raid_bdev1", 00:10:53.364 "uuid": "365a11f6-accc-4b74-ab51-ece25ee50dd6", 00:10:53.364 "strip_size_kb": 64, 00:10:53.364 "state": "configuring", 00:10:53.364 "raid_level": "raid0", 00:10:53.364 "superblock": true, 00:10:53.364 "num_base_bdevs": 4, 00:10:53.364 "num_base_bdevs_discovered": 1, 00:10:53.364 "num_base_bdevs_operational": 4, 00:10:53.364 "base_bdevs_list": [ 00:10:53.364 { 00:10:53.364 "name": "pt1", 00:10:53.364 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:53.364 "is_configured": true, 00:10:53.364 "data_offset": 2048, 00:10:53.364 "data_size": 63488 00:10:53.364 }, 00:10:53.364 { 00:10:53.364 "name": null, 00:10:53.364 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:53.364 "is_configured": false, 00:10:53.364 "data_offset": 2048, 00:10:53.364 "data_size": 63488 00:10:53.364 }, 00:10:53.364 { 00:10:53.364 "name": null, 00:10:53.364 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:53.364 "is_configured": false, 00:10:53.364 "data_offset": 2048, 00:10:53.364 "data_size": 63488 00:10:53.364 }, 00:10:53.364 { 00:10:53.364 "name": null, 00:10:53.364 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:53.364 "is_configured": false, 00:10:53.364 "data_offset": 2048, 00:10:53.364 "data_size": 63488 00:10:53.364 } 00:10:53.364 ] 00:10:53.364 }' 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.364 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.931 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:53.931 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:53.931 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.931 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.932 [2024-11-15 10:39:14.902991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:53.932 [2024-11-15 10:39:14.903226] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.932 [2024-11-15 10:39:14.903411] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:53.932 [2024-11-15 10:39:14.903445] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.932 [2024-11-15 10:39:14.904033] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.932 [2024-11-15 10:39:14.904105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:53.932 [2024-11-15 10:39:14.904209] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:53.932 [2024-11-15 10:39:14.904248] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:53.932 pt2 00:10:53.932 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.932 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:53.932 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.932 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.932 [2024-11-15 10:39:14.910988] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:53.932 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.932 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:53.932 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:53.932 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.932 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:53.932 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.932 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.932 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.932 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.932 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.932 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.932 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.932 10:39:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.932 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.932 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.932 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.932 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.932 "name": "raid_bdev1", 00:10:53.932 "uuid": "365a11f6-accc-4b74-ab51-ece25ee50dd6", 00:10:53.932 "strip_size_kb": 64, 00:10:53.932 "state": "configuring", 00:10:53.932 "raid_level": "raid0", 00:10:53.932 "superblock": true, 00:10:53.932 "num_base_bdevs": 4, 00:10:53.932 "num_base_bdevs_discovered": 1, 00:10:53.932 "num_base_bdevs_operational": 4, 00:10:53.932 "base_bdevs_list": [ 00:10:53.932 { 00:10:53.932 "name": "pt1", 00:10:53.932 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:53.932 "is_configured": true, 00:10:53.932 "data_offset": 2048, 00:10:53.932 "data_size": 63488 00:10:53.932 }, 00:10:53.932 { 00:10:53.932 "name": null, 00:10:53.932 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:53.932 "is_configured": false, 00:10:53.932 "data_offset": 0, 00:10:53.932 "data_size": 63488 00:10:53.932 }, 00:10:53.932 { 00:10:53.932 "name": null, 00:10:53.932 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:53.932 "is_configured": false, 00:10:53.932 "data_offset": 2048, 00:10:53.932 "data_size": 63488 00:10:53.932 }, 00:10:53.932 { 00:10:53.932 "name": null, 00:10:53.932 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:53.932 "is_configured": false, 00:10:53.932 "data_offset": 2048, 00:10:53.932 "data_size": 63488 00:10:53.932 } 00:10:53.932 ] 00:10:53.932 }' 00:10:53.932 10:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.932 10:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:54.501 10:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:54.501 10:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:54.501 10:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:54.501 10:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.501 10:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.501 [2024-11-15 10:39:15.459104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:54.501 [2024-11-15 10:39:15.459181] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:54.501 [2024-11-15 10:39:15.459213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:54.501 [2024-11-15 10:39:15.459228] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:54.501 [2024-11-15 10:39:15.459797] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:54.501 [2024-11-15 10:39:15.459823] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:54.501 [2024-11-15 10:39:15.459938] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:54.501 [2024-11-15 10:39:15.459971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:54.501 pt2 00:10:54.501 10:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.501 10:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:54.501 10:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:54.501 10:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p 
pt3 -u 00000000-0000-0000-0000-000000000003 00:10:54.501 10:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.501 10:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.501 [2024-11-15 10:39:15.471070] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:54.501 [2024-11-15 10:39:15.471128] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:54.501 [2024-11-15 10:39:15.471164] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:54.501 [2024-11-15 10:39:15.471180] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:54.501 [2024-11-15 10:39:15.471647] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:54.501 [2024-11-15 10:39:15.471678] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:54.501 [2024-11-15 10:39:15.471761] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:54.501 [2024-11-15 10:39:15.471795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:54.501 pt3 00:10:54.501 10:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.501 10:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:54.501 10:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:54.501 10:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:54.501 10:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.501 10:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.501 [2024-11-15 10:39:15.479052] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc4 00:10:54.501 [2024-11-15 10:39:15.479112] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:54.501 [2024-11-15 10:39:15.479142] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:54.501 [2024-11-15 10:39:15.479156] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:54.501 [2024-11-15 10:39:15.479623] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:54.501 [2024-11-15 10:39:15.479655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:54.501 [2024-11-15 10:39:15.479735] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:54.501 [2024-11-15 10:39:15.479770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:54.501 [2024-11-15 10:39:15.479933] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:54.501 [2024-11-15 10:39:15.479948] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:54.501 [2024-11-15 10:39:15.480263] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:54.501 [2024-11-15 10:39:15.480454] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:54.501 [2024-11-15 10:39:15.480476] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:54.501 [2024-11-15 10:39:15.480668] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:54.501 pt4 00:10:54.501 10:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.501 10:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:54.501 10:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:54.501 
10:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:54.501 10:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:54.501 10:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:54.501 10:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:54.501 10:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.501 10:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.501 10:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.501 10:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.501 10:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.501 10:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.501 10:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.501 10:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.501 10:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:54.501 10:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.501 10:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.501 10:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.501 "name": "raid_bdev1", 00:10:54.501 "uuid": "365a11f6-accc-4b74-ab51-ece25ee50dd6", 00:10:54.501 "strip_size_kb": 64, 00:10:54.501 "state": "online", 00:10:54.501 "raid_level": "raid0", 00:10:54.501 "superblock": true, 00:10:54.501 
"num_base_bdevs": 4, 00:10:54.501 "num_base_bdevs_discovered": 4, 00:10:54.501 "num_base_bdevs_operational": 4, 00:10:54.501 "base_bdevs_list": [ 00:10:54.501 { 00:10:54.501 "name": "pt1", 00:10:54.501 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:54.501 "is_configured": true, 00:10:54.501 "data_offset": 2048, 00:10:54.501 "data_size": 63488 00:10:54.501 }, 00:10:54.501 { 00:10:54.501 "name": "pt2", 00:10:54.501 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:54.501 "is_configured": true, 00:10:54.501 "data_offset": 2048, 00:10:54.501 "data_size": 63488 00:10:54.501 }, 00:10:54.501 { 00:10:54.501 "name": "pt3", 00:10:54.501 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:54.501 "is_configured": true, 00:10:54.501 "data_offset": 2048, 00:10:54.501 "data_size": 63488 00:10:54.501 }, 00:10:54.501 { 00:10:54.501 "name": "pt4", 00:10:54.501 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:54.501 "is_configured": true, 00:10:54.501 "data_offset": 2048, 00:10:54.501 "data_size": 63488 00:10:54.501 } 00:10:54.501 ] 00:10:54.501 }' 00:10:54.501 10:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.501 10:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.067 10:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:55.067 10:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:55.067 10:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:55.067 10:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:55.067 10:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:55.067 10:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:55.067 10:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:55.067 10:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:55.067 10:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.067 10:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.067 [2024-11-15 10:39:15.999645] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:55.067 10:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.067 10:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:55.067 "name": "raid_bdev1", 00:10:55.067 "aliases": [ 00:10:55.067 "365a11f6-accc-4b74-ab51-ece25ee50dd6" 00:10:55.067 ], 00:10:55.067 "product_name": "Raid Volume", 00:10:55.067 "block_size": 512, 00:10:55.067 "num_blocks": 253952, 00:10:55.067 "uuid": "365a11f6-accc-4b74-ab51-ece25ee50dd6", 00:10:55.067 "assigned_rate_limits": { 00:10:55.067 "rw_ios_per_sec": 0, 00:10:55.067 "rw_mbytes_per_sec": 0, 00:10:55.067 "r_mbytes_per_sec": 0, 00:10:55.067 "w_mbytes_per_sec": 0 00:10:55.067 }, 00:10:55.067 "claimed": false, 00:10:55.067 "zoned": false, 00:10:55.067 "supported_io_types": { 00:10:55.067 "read": true, 00:10:55.067 "write": true, 00:10:55.067 "unmap": true, 00:10:55.067 "flush": true, 00:10:55.067 "reset": true, 00:10:55.067 "nvme_admin": false, 00:10:55.067 "nvme_io": false, 00:10:55.067 "nvme_io_md": false, 00:10:55.067 "write_zeroes": true, 00:10:55.067 "zcopy": false, 00:10:55.067 "get_zone_info": false, 00:10:55.067 "zone_management": false, 00:10:55.067 "zone_append": false, 00:10:55.067 "compare": false, 00:10:55.067 "compare_and_write": false, 00:10:55.067 "abort": false, 00:10:55.067 "seek_hole": false, 00:10:55.067 "seek_data": false, 00:10:55.067 "copy": false, 00:10:55.067 "nvme_iov_md": false 00:10:55.067 }, 00:10:55.067 "memory_domains": [ 00:10:55.067 { 00:10:55.067 "dma_device_id": "system", 
00:10:55.067 "dma_device_type": 1 00:10:55.067 }, 00:10:55.067 { 00:10:55.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.067 "dma_device_type": 2 00:10:55.067 }, 00:10:55.067 { 00:10:55.067 "dma_device_id": "system", 00:10:55.067 "dma_device_type": 1 00:10:55.067 }, 00:10:55.067 { 00:10:55.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.067 "dma_device_type": 2 00:10:55.067 }, 00:10:55.067 { 00:10:55.067 "dma_device_id": "system", 00:10:55.067 "dma_device_type": 1 00:10:55.067 }, 00:10:55.067 { 00:10:55.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.067 "dma_device_type": 2 00:10:55.067 }, 00:10:55.067 { 00:10:55.067 "dma_device_id": "system", 00:10:55.067 "dma_device_type": 1 00:10:55.067 }, 00:10:55.067 { 00:10:55.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.067 "dma_device_type": 2 00:10:55.067 } 00:10:55.067 ], 00:10:55.067 "driver_specific": { 00:10:55.067 "raid": { 00:10:55.067 "uuid": "365a11f6-accc-4b74-ab51-ece25ee50dd6", 00:10:55.067 "strip_size_kb": 64, 00:10:55.067 "state": "online", 00:10:55.067 "raid_level": "raid0", 00:10:55.067 "superblock": true, 00:10:55.067 "num_base_bdevs": 4, 00:10:55.067 "num_base_bdevs_discovered": 4, 00:10:55.067 "num_base_bdevs_operational": 4, 00:10:55.067 "base_bdevs_list": [ 00:10:55.067 { 00:10:55.067 "name": "pt1", 00:10:55.067 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:55.067 "is_configured": true, 00:10:55.067 "data_offset": 2048, 00:10:55.067 "data_size": 63488 00:10:55.067 }, 00:10:55.067 { 00:10:55.067 "name": "pt2", 00:10:55.067 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:55.067 "is_configured": true, 00:10:55.067 "data_offset": 2048, 00:10:55.067 "data_size": 63488 00:10:55.067 }, 00:10:55.067 { 00:10:55.067 "name": "pt3", 00:10:55.067 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:55.067 "is_configured": true, 00:10:55.067 "data_offset": 2048, 00:10:55.067 "data_size": 63488 00:10:55.067 }, 00:10:55.067 { 00:10:55.067 "name": "pt4", 00:10:55.067 
"uuid": "00000000-0000-0000-0000-000000000004", 00:10:55.067 "is_configured": true, 00:10:55.067 "data_offset": 2048, 00:10:55.067 "data_size": 63488 00:10:55.067 } 00:10:55.067 ] 00:10:55.067 } 00:10:55.067 } 00:10:55.067 }' 00:10:55.067 10:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:55.067 10:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:55.067 pt2 00:10:55.067 pt3 00:10:55.067 pt4' 00:10:55.067 10:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.067 10:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:55.067 10:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.067 10:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.067 10:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:55.067 10:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.067 10:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.067 10:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.067 10:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.067 10:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.067 10:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.067 10:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:55.067 10:39:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.067 10:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.067 10:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.067 10:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.325 10:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.325 10:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.325 10:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.325 10:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:55.325 10:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.325 10:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.325 10:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.325 10:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.325 10:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.325 10:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.325 10:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.325 10:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:55.325 10:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.325 10:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.325 10:39:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.325 10:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.325 10:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.325 10:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.325 10:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:55.325 10:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.325 10:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.325 10:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:55.326 [2024-11-15 10:39:16.375685] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:55.326 10:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.326 10:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 365a11f6-accc-4b74-ab51-ece25ee50dd6 '!=' 365a11f6-accc-4b74-ab51-ece25ee50dd6 ']' 00:10:55.326 10:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:55.326 10:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:55.326 10:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:55.326 10:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70807 00:10:55.326 10:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70807 ']' 00:10:55.326 10:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70807 00:10:55.326 10:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:55.326 10:39:16 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:55.326 10:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70807 00:10:55.326 10:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:55.326 10:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:55.326 10:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70807' 00:10:55.326 killing process with pid 70807 00:10:55.326 10:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70807 00:10:55.326 [2024-11-15 10:39:16.458197] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:55.326 10:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70807 00:10:55.326 [2024-11-15 10:39:16.458312] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:55.326 [2024-11-15 10:39:16.458410] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:55.326 [2024-11-15 10:39:16.458426] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:55.891 [2024-11-15 10:39:16.807698] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:56.826 10:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:56.826 00:10:56.826 real 0m5.921s 00:10:56.826 user 0m8.919s 00:10:56.826 sys 0m0.833s 00:10:56.826 10:39:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:56.826 ************************************ 00:10:56.826 10:39:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.826 END TEST raid_superblock_test 00:10:56.826 ************************************ 00:10:56.826 
10:39:17 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:10:56.826 10:39:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:56.826 10:39:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:56.826 10:39:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:56.826 ************************************ 00:10:56.826 START TEST raid_read_error_test 00:10:56.826 ************************************ 00:10:56.826 10:39:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:10:56.827 10:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:56.827 10:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:56.827 10:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:56.827 10:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:56.827 10:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:56.827 10:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:56.827 10:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:56.827 10:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:56.827 10:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:56.827 10:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:56.827 10:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:56.827 10:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:56.827 10:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:56.827 10:39:17 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:56.827 10:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:56.827 10:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:56.827 10:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:56.827 10:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:56.827 10:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:56.827 10:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:56.827 10:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:56.827 10:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:56.827 10:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:56.827 10:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:56.827 10:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:56.827 10:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:56.827 10:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:56.827 10:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:56.827 10:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.r02cArvF2s 00:10:56.827 10:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71071 00:10:56.827 10:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71071 00:10:56.827 10:39:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 71071 ']' 00:10:56.827 10:39:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:56.827 10:39:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.827 10:39:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:56.827 10:39:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:56.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:56.827 10:39:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:56.827 10:39:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.101 [2024-11-15 10:39:17.995987] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:10:57.102 [2024-11-15 10:39:17.996142] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71071 ] 00:10:57.102 [2024-11-15 10:39:18.171284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.359 [2024-11-15 10:39:18.311295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.617 [2024-11-15 10:39:18.520219] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:57.617 [2024-11-15 10:39:18.520403] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:57.874 10:39:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:57.874 10:39:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:57.874 10:39:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:57.874 10:39:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:57.874 10:39:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.874 10:39:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.874 BaseBdev1_malloc 00:10:57.874 10:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.874 10:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:57.875 10:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.875 10:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.133 true 00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.133 [2024-11-15 10:39:19.047316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:58.133 [2024-11-15 10:39:19.047535] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.133 [2024-11-15 10:39:19.047612] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:58.133 [2024-11-15 10:39:19.047840] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.133 [2024-11-15 10:39:19.050680] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.133 [2024-11-15 10:39:19.050733] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:58.133 BaseBdev1 00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.133 BaseBdev2_malloc 00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.133 true 00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.133 [2024-11-15 10:39:19.115533] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:58.133 [2024-11-15 10:39:19.115727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.133 [2024-11-15 10:39:19.115764] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:58.133 [2024-11-15 10:39:19.115783] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.133 [2024-11-15 10:39:19.118547] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.133 [2024-11-15 10:39:19.118596] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:58.133 BaseBdev2 00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.133 BaseBdev3_malloc 00:10:58.133 10:39:19 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.133 true 00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.133 [2024-11-15 10:39:19.198143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:58.133 [2024-11-15 10:39:19.198223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.133 [2024-11-15 10:39:19.198252] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:58.133 [2024-11-15 10:39:19.198270] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.133 [2024-11-15 10:39:19.201083] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.133 [2024-11-15 10:39:19.201289] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:58.133 BaseBdev3 00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.133 BaseBdev4_malloc 00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.133 true 00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.133 [2024-11-15 10:39:19.253838] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:58.133 [2024-11-15 10:39:19.254027] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.133 [2024-11-15 10:39:19.254064] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:58.133 [2024-11-15 10:39:19.254083] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.133 [2024-11-15 10:39:19.256857] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.133 [2024-11-15 10:39:19.256911] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:58.133 BaseBdev4 00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.133 10:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.133 [2024-11-15 10:39:19.261913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:58.133 [2024-11-15 10:39:19.264322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:58.133 [2024-11-15 10:39:19.264581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:58.133 [2024-11-15 10:39:19.264699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:58.133 [2024-11-15 10:39:19.265009] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:58.133 [2024-11-15 10:39:19.265038] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:58.134 [2024-11-15 10:39:19.265355] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:58.134 [2024-11-15 10:39:19.265608] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:58.134 [2024-11-15 10:39:19.265629] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:58.134 [2024-11-15 10:39:19.265877] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:58.134 10:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.134 10:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:58.134 10:39:19 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:58.134 10:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:58.134 10:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:58.134 10:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.134 10:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.134 10:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.134 10:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.134 10:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.134 10:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.134 10:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.134 10:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.134 10:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.134 10:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.134 10:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.392 10:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.392 "name": "raid_bdev1", 00:10:58.392 "uuid": "f05fd1ca-6f70-4837-96c0-f800647efad4", 00:10:58.392 "strip_size_kb": 64, 00:10:58.392 "state": "online", 00:10:58.392 "raid_level": "raid0", 00:10:58.392 "superblock": true, 00:10:58.392 "num_base_bdevs": 4, 00:10:58.392 "num_base_bdevs_discovered": 4, 00:10:58.392 "num_base_bdevs_operational": 4, 00:10:58.392 "base_bdevs_list": [ 00:10:58.392 
{ 00:10:58.392 "name": "BaseBdev1", 00:10:58.392 "uuid": "276d69ae-25e4-5216-a25a-d7350b6b615d", 00:10:58.392 "is_configured": true, 00:10:58.392 "data_offset": 2048, 00:10:58.392 "data_size": 63488 00:10:58.392 }, 00:10:58.392 { 00:10:58.392 "name": "BaseBdev2", 00:10:58.392 "uuid": "c25752bf-6eb6-5689-bb4b-2e588dc01533", 00:10:58.392 "is_configured": true, 00:10:58.392 "data_offset": 2048, 00:10:58.392 "data_size": 63488 00:10:58.392 }, 00:10:58.392 { 00:10:58.392 "name": "BaseBdev3", 00:10:58.392 "uuid": "d9798307-3b30-56a0-936c-284f096aaa3a", 00:10:58.392 "is_configured": true, 00:10:58.392 "data_offset": 2048, 00:10:58.392 "data_size": 63488 00:10:58.392 }, 00:10:58.392 { 00:10:58.392 "name": "BaseBdev4", 00:10:58.392 "uuid": "a8d2e2b8-c1bd-5306-8efb-b93d6309c0a9", 00:10:58.392 "is_configured": true, 00:10:58.392 "data_offset": 2048, 00:10:58.392 "data_size": 63488 00:10:58.392 } 00:10:58.392 ] 00:10:58.392 }' 00:10:58.392 10:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.392 10:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.650 10:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:58.650 10:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:58.907 [2024-11-15 10:39:19.895557] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:59.841 10:39:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:59.841 10:39:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.841 10:39:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.841 10:39:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.841 10:39:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:59.841 10:39:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:59.841 10:39:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:59.841 10:39:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:59.841 10:39:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:59.841 10:39:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:59.841 10:39:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:59.841 10:39:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.841 10:39:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.841 10:39:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.841 10:39:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.841 10:39:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.841 10:39:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.841 10:39:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.841 10:39:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:59.841 10:39:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.841 10:39:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.841 10:39:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.841 10:39:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.841 "name": "raid_bdev1", 00:10:59.841 "uuid": "f05fd1ca-6f70-4837-96c0-f800647efad4", 00:10:59.841 "strip_size_kb": 64, 00:10:59.841 "state": "online", 00:10:59.841 "raid_level": "raid0", 00:10:59.841 "superblock": true, 00:10:59.841 "num_base_bdevs": 4, 00:10:59.841 "num_base_bdevs_discovered": 4, 00:10:59.841 "num_base_bdevs_operational": 4, 00:10:59.841 "base_bdevs_list": [ 00:10:59.841 { 00:10:59.841 "name": "BaseBdev1", 00:10:59.841 "uuid": "276d69ae-25e4-5216-a25a-d7350b6b615d", 00:10:59.841 "is_configured": true, 00:10:59.841 "data_offset": 2048, 00:10:59.841 "data_size": 63488 00:10:59.841 }, 00:10:59.841 { 00:10:59.841 "name": "BaseBdev2", 00:10:59.841 "uuid": "c25752bf-6eb6-5689-bb4b-2e588dc01533", 00:10:59.841 "is_configured": true, 00:10:59.841 "data_offset": 2048, 00:10:59.841 "data_size": 63488 00:10:59.841 }, 00:10:59.841 { 00:10:59.841 "name": "BaseBdev3", 00:10:59.841 "uuid": "d9798307-3b30-56a0-936c-284f096aaa3a", 00:10:59.841 "is_configured": true, 00:10:59.841 "data_offset": 2048, 00:10:59.841 "data_size": 63488 00:10:59.841 }, 00:10:59.841 { 00:10:59.841 "name": "BaseBdev4", 00:10:59.841 "uuid": "a8d2e2b8-c1bd-5306-8efb-b93d6309c0a9", 00:10:59.841 "is_configured": true, 00:10:59.841 "data_offset": 2048, 00:10:59.841 "data_size": 63488 00:10:59.841 } 00:10:59.841 ] 00:10:59.841 }' 00:10:59.841 10:39:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.841 10:39:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.407 10:39:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:00.407 10:39:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.407 10:39:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.407 [2024-11-15 10:39:21.306441] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:00.407 [2024-11-15 10:39:21.306639] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:00.407 [2024-11-15 10:39:21.310096] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:00.407 [2024-11-15 10:39:21.310303] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:00.407 [2024-11-15 10:39:21.310426] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:00.407 [2024-11-15 10:39:21.310623] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:00.407 { 00:11:00.407 "results": [ 00:11:00.407 { 00:11:00.407 "job": "raid_bdev1", 00:11:00.407 "core_mask": "0x1", 00:11:00.407 "workload": "randrw", 00:11:00.407 "percentage": 50, 00:11:00.407 "status": "finished", 00:11:00.407 "queue_depth": 1, 00:11:00.407 "io_size": 131072, 00:11:00.407 "runtime": 1.408722, 00:11:00.407 "iops": 10765.786294244002, 00:11:00.407 "mibps": 1345.7232867805003, 00:11:00.407 "io_failed": 1, 00:11:00.407 "io_timeout": 0, 00:11:00.407 "avg_latency_us": 129.25405275808123, 00:11:00.407 "min_latency_us": 40.72727272727273, 00:11:00.407 "max_latency_us": 1817.1345454545456 00:11:00.407 } 00:11:00.407 ], 00:11:00.407 "core_count": 1 00:11:00.407 } 00:11:00.407 10:39:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.407 10:39:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71071 00:11:00.407 10:39:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71071 ']' 00:11:00.407 10:39:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71071 00:11:00.407 10:39:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:00.407 10:39:21 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:00.407 10:39:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71071 00:11:00.407 killing process with pid 71071 00:11:00.407 10:39:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:00.407 10:39:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:00.407 10:39:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71071' 00:11:00.407 10:39:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71071 00:11:00.407 [2024-11-15 10:39:21.343470] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:00.407 10:39:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71071 00:11:00.666 [2024-11-15 10:39:21.628232] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:01.602 10:39:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:01.602 10:39:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.r02cArvF2s 00:11:01.602 10:39:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:01.602 10:39:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:11:01.602 10:39:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:01.602 10:39:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:01.602 10:39:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:01.602 10:39:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:11:01.602 00:11:01.602 real 0m4.835s 00:11:01.602 user 0m5.973s 00:11:01.602 sys 0m0.557s 00:11:01.602 ************************************ 00:11:01.602 END TEST raid_read_error_test 
00:11:01.602 ************************************ 00:11:01.602 10:39:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.602 10:39:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.860 10:39:22 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:11:01.860 10:39:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:01.860 10:39:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.860 10:39:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:01.860 ************************************ 00:11:01.860 START TEST raid_write_error_test 00:11:01.860 ************************************ 00:11:01.860 10:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:11:01.860 10:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:01.860 10:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:01.860 10:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:01.860 10:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:01.860 10:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:01.860 10:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:01.860 10:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:01.860 10:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:01.860 10:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:01.860 10:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:01.860 10:39:22 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:01.860 10:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:01.860 10:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:01.860 10:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:01.860 10:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:01.860 10:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:01.860 10:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:01.860 10:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:01.860 10:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:01.860 10:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:01.860 10:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:01.860 10:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:01.860 10:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:01.860 10:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:01.860 10:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:01.860 10:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:01.860 10:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:01.860 10:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:01.860 10:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.N5cjRtY9Hi 00:11:01.860 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.860 10:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71219 00:11:01.860 10:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71219 00:11:01.860 10:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:01.860 10:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71219 ']' 00:11:01.860 10:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.860 10:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:01.860 10:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.860 10:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:01.860 10:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.860 [2024-11-15 10:39:22.908539] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:11:01.860 [2024-11-15 10:39:22.908910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71219 ] 00:11:02.118 [2024-11-15 10:39:23.095328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.118 [2024-11-15 10:39:23.227705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.376 [2024-11-15 10:39:23.426147] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:02.376 [2024-11-15 10:39:23.426421] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:02.944 10:39:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:02.944 10:39:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:02.944 10:39:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:02.944 10:39:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:02.944 10:39:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.944 10:39:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.944 BaseBdev1_malloc 00:11:02.944 10:39:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.944 10:39:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:02.944 10:39:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.944 10:39:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.944 true 00:11:02.944 10:39:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:02.944 10:39:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:02.944 10:39:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.944 10:39:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.944 [2024-11-15 10:39:23.948380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:02.944 [2024-11-15 10:39:23.948455] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.944 [2024-11-15 10:39:23.948506] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:02.944 [2024-11-15 10:39:23.948531] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.944 [2024-11-15 10:39:23.951357] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.944 [2024-11-15 10:39:23.951410] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:02.944 BaseBdev1 00:11:02.944 10:39:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.944 10:39:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:02.944 10:39:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:02.944 10:39:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.944 10:39:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.944 BaseBdev2_malloc 00:11:02.944 10:39:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.944 10:39:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:02.944 10:39:23 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.944 10:39:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.944 true 00:11:02.944 10:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.944 10:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:02.944 10:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.944 10:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.944 [2024-11-15 10:39:24.013651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:02.944 [2024-11-15 10:39:24.013720] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.944 [2024-11-15 10:39:24.013748] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:02.944 [2024-11-15 10:39:24.013766] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.944 [2024-11-15 10:39:24.016477] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.944 [2024-11-15 10:39:24.016549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:02.944 BaseBdev2 00:11:02.944 10:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.944 10:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:02.944 10:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:02.944 10:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.944 10:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:02.944 BaseBdev3_malloc 00:11:02.944 10:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.944 10:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:02.944 10:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.944 10:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.944 true 00:11:02.944 10:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.944 10:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:02.944 10:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.944 10:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.944 [2024-11-15 10:39:24.098250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:02.944 [2024-11-15 10:39:24.098325] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.944 [2024-11-15 10:39:24.098354] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:02.944 [2024-11-15 10:39:24.098373] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.945 [2024-11-15 10:39:24.101173] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.945 [2024-11-15 10:39:24.101225] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:03.203 BaseBdev3 00:11:03.203 10:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.203 10:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:03.203 10:39:24 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:03.203 10:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.203 10:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.203 BaseBdev4_malloc 00:11:03.203 10:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.203 10:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:03.203 10:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.203 10:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.203 true 00:11:03.203 10:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.203 10:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:03.203 10:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.203 10:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.203 [2024-11-15 10:39:24.162294] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:03.203 [2024-11-15 10:39:24.162362] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.203 [2024-11-15 10:39:24.162391] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:03.203 [2024-11-15 10:39:24.162409] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.203 [2024-11-15 10:39:24.165175] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.203 [2024-11-15 10:39:24.165231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:03.203 BaseBdev4 
00:11:03.203 10:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.203 10:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:03.203 10:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.203 10:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.203 [2024-11-15 10:39:24.170353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:03.203 [2024-11-15 10:39:24.172904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:03.203 [2024-11-15 10:39:24.173013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:03.203 [2024-11-15 10:39:24.173119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:03.203 [2024-11-15 10:39:24.173414] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:03.203 [2024-11-15 10:39:24.173444] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:03.203 [2024-11-15 10:39:24.173780] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:03.203 [2024-11-15 10:39:24.174001] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:03.203 [2024-11-15 10:39:24.174023] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:03.203 [2024-11-15 10:39:24.174257] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:03.203 10:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.203 10:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:11:03.203 10:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:03.203 10:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:03.203 10:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:03.204 10:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.204 10:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.204 10:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.204 10:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.204 10:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.204 10:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.204 10:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.204 10:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.204 10:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.204 10:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.204 10:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.204 10:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.204 "name": "raid_bdev1", 00:11:03.204 "uuid": "6c7a10ce-f1cc-46a7-a886-6795ab0be0a9", 00:11:03.204 "strip_size_kb": 64, 00:11:03.204 "state": "online", 00:11:03.204 "raid_level": "raid0", 00:11:03.204 "superblock": true, 00:11:03.204 "num_base_bdevs": 4, 00:11:03.204 "num_base_bdevs_discovered": 4, 00:11:03.204 
"num_base_bdevs_operational": 4, 00:11:03.204 "base_bdevs_list": [ 00:11:03.204 { 00:11:03.204 "name": "BaseBdev1", 00:11:03.204 "uuid": "91a81895-2411-53ee-a816-8b3bd88d0d81", 00:11:03.204 "is_configured": true, 00:11:03.204 "data_offset": 2048, 00:11:03.204 "data_size": 63488 00:11:03.204 }, 00:11:03.204 { 00:11:03.204 "name": "BaseBdev2", 00:11:03.204 "uuid": "b216d620-d649-5721-bbf4-a48737c0d6f8", 00:11:03.204 "is_configured": true, 00:11:03.204 "data_offset": 2048, 00:11:03.204 "data_size": 63488 00:11:03.204 }, 00:11:03.204 { 00:11:03.204 "name": "BaseBdev3", 00:11:03.204 "uuid": "bf2adbfe-0e13-5eb0-8122-4ba50c96685e", 00:11:03.204 "is_configured": true, 00:11:03.204 "data_offset": 2048, 00:11:03.204 "data_size": 63488 00:11:03.204 }, 00:11:03.204 { 00:11:03.204 "name": "BaseBdev4", 00:11:03.204 "uuid": "ccd5e7c8-eb1f-560a-a8ae-f239ea84653d", 00:11:03.204 "is_configured": true, 00:11:03.204 "data_offset": 2048, 00:11:03.204 "data_size": 63488 00:11:03.204 } 00:11:03.204 ] 00:11:03.204 }' 00:11:03.204 10:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.204 10:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.771 10:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:03.771 10:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:03.771 [2024-11-15 10:39:24.867958] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:04.704 10:39:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:04.704 10:39:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.704 10:39:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.704 10:39:25 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.704 10:39:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:04.704 10:39:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:04.704 10:39:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:04.704 10:39:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:04.704 10:39:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:04.704 10:39:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:04.704 10:39:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:04.704 10:39:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.704 10:39:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.704 10:39:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.704 10:39:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.704 10:39:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.704 10:39:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.704 10:39:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.704 10:39:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.704 10:39:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.704 10:39:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.704 10:39:25 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.704 10:39:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.704 "name": "raid_bdev1", 00:11:04.704 "uuid": "6c7a10ce-f1cc-46a7-a886-6795ab0be0a9", 00:11:04.704 "strip_size_kb": 64, 00:11:04.704 "state": "online", 00:11:04.704 "raid_level": "raid0", 00:11:04.704 "superblock": true, 00:11:04.704 "num_base_bdevs": 4, 00:11:04.704 "num_base_bdevs_discovered": 4, 00:11:04.704 "num_base_bdevs_operational": 4, 00:11:04.704 "base_bdevs_list": [ 00:11:04.704 { 00:11:04.704 "name": "BaseBdev1", 00:11:04.704 "uuid": "91a81895-2411-53ee-a816-8b3bd88d0d81", 00:11:04.704 "is_configured": true, 00:11:04.704 "data_offset": 2048, 00:11:04.704 "data_size": 63488 00:11:04.704 }, 00:11:04.704 { 00:11:04.704 "name": "BaseBdev2", 00:11:04.704 "uuid": "b216d620-d649-5721-bbf4-a48737c0d6f8", 00:11:04.704 "is_configured": true, 00:11:04.704 "data_offset": 2048, 00:11:04.704 "data_size": 63488 00:11:04.704 }, 00:11:04.704 { 00:11:04.704 "name": "BaseBdev3", 00:11:04.704 "uuid": "bf2adbfe-0e13-5eb0-8122-4ba50c96685e", 00:11:04.704 "is_configured": true, 00:11:04.704 "data_offset": 2048, 00:11:04.704 "data_size": 63488 00:11:04.704 }, 00:11:04.704 { 00:11:04.704 "name": "BaseBdev4", 00:11:04.704 "uuid": "ccd5e7c8-eb1f-560a-a8ae-f239ea84653d", 00:11:04.704 "is_configured": true, 00:11:04.704 "data_offset": 2048, 00:11:04.704 "data_size": 63488 00:11:04.704 } 00:11:04.704 ] 00:11:04.704 }' 00:11:04.704 10:39:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.704 10:39:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.290 10:39:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:05.290 10:39:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.290 10:39:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:05.290 [2024-11-15 10:39:26.275083] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:05.290 [2024-11-15 10:39:26.275122] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:05.290 [2024-11-15 10:39:26.278411] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:05.290 [2024-11-15 10:39:26.278642] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:05.290 [2024-11-15 10:39:26.278720] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:05.290 [2024-11-15 10:39:26.278742] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:05.290 { 00:11:05.290 "results": [ 00:11:05.290 { 00:11:05.290 "job": "raid_bdev1", 00:11:05.290 "core_mask": "0x1", 00:11:05.290 "workload": "randrw", 00:11:05.290 "percentage": 50, 00:11:05.290 "status": "finished", 00:11:05.290 "queue_depth": 1, 00:11:05.290 "io_size": 131072, 00:11:05.290 "runtime": 1.404548, 00:11:05.290 "iops": 10438.945482817247, 00:11:05.290 "mibps": 1304.868185352156, 00:11:05.290 "io_failed": 1, 00:11:05.290 "io_timeout": 0, 00:11:05.290 "avg_latency_us": 133.5595994866485, 00:11:05.290 "min_latency_us": 40.02909090909091, 00:11:05.290 "max_latency_us": 1839.4763636363637 00:11:05.290 } 00:11:05.290 ], 00:11:05.290 "core_count": 1 00:11:05.290 } 00:11:05.290 10:39:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.290 10:39:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71219 00:11:05.290 10:39:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71219 ']' 00:11:05.290 10:39:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71219 00:11:05.290 10:39:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:11:05.290 10:39:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:05.290 10:39:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71219 00:11:05.290 killing process with pid 71219 00:11:05.290 10:39:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:05.290 10:39:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:05.290 10:39:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71219' 00:11:05.291 10:39:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71219 00:11:05.291 10:39:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71219 00:11:05.291 [2024-11-15 10:39:26.309150] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:05.549 [2024-11-15 10:39:26.592610] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:06.926 10:39:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:06.926 10:39:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.N5cjRtY9Hi 00:11:06.926 10:39:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:06.926 10:39:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:11:06.926 10:39:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:06.926 10:39:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:06.926 10:39:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:06.926 10:39:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:11:06.926 00:11:06.926 real 0m4.942s 00:11:06.926 user 0m6.138s 00:11:06.926 sys 0m0.565s 00:11:06.926 10:39:27 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:06.926 10:39:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.926 ************************************ 00:11:06.926 END TEST raid_write_error_test 00:11:06.926 ************************************ 00:11:06.926 10:39:27 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:06.926 10:39:27 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:11:06.926 10:39:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:06.926 10:39:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:06.926 10:39:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:06.926 ************************************ 00:11:06.926 START TEST raid_state_function_test 00:11:06.926 ************************************ 00:11:06.926 10:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:11:06.926 10:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:06.926 10:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:06.926 10:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:06.926 10:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:06.926 10:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:06.926 10:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:06.926 10:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:06.926 10:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:06.926 10:39:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:06.926 10:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:06.926 10:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:06.926 10:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:06.926 10:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:06.926 10:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:06.926 10:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:06.926 10:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:06.926 10:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:06.926 10:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:06.926 10:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:06.926 10:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:06.926 10:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:06.926 10:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:06.926 10:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:06.926 10:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:06.926 10:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:06.926 10:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:06.926 10:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:11:06.927 10:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:06.927 10:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:06.927 Process raid pid: 71369 00:11:06.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.927 10:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71369 00:11:06.927 10:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71369' 00:11:06.927 10:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:06.927 10:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71369 00:11:06.927 10:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71369 ']' 00:11:06.927 10:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.927 10:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:06.927 10:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.927 10:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:06.927 10:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.927 [2024-11-15 10:39:27.897880] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:11:06.927 [2024-11-15 10:39:27.898290] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:06.927 [2024-11-15 10:39:28.083468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.186 [2024-11-15 10:39:28.253682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.444 [2024-11-15 10:39:28.487506] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:07.444 [2024-11-15 10:39:28.487546] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:08.009 10:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:08.009 10:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:08.009 10:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:08.009 10:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.009 10:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.009 [2024-11-15 10:39:28.908095] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:08.009 [2024-11-15 10:39:28.908172] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:08.009 [2024-11-15 10:39:28.908189] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:08.009 [2024-11-15 10:39:28.908205] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:08.009 [2024-11-15 10:39:28.908215] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:08.009 [2024-11-15 10:39:28.908229] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:08.009 [2024-11-15 10:39:28.908239] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:08.009 [2024-11-15 10:39:28.908252] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:08.009 10:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.009 10:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:08.009 10:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.009 10:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.009 10:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:08.009 10:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.009 10:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.009 10:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.009 10:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.009 10:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.009 10:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.009 10:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.009 10:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.009 10:39:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.009 10:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.009 10:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.009 10:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.009 "name": "Existed_Raid", 00:11:08.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.009 "strip_size_kb": 64, 00:11:08.009 "state": "configuring", 00:11:08.009 "raid_level": "concat", 00:11:08.009 "superblock": false, 00:11:08.009 "num_base_bdevs": 4, 00:11:08.009 "num_base_bdevs_discovered": 0, 00:11:08.009 "num_base_bdevs_operational": 4, 00:11:08.009 "base_bdevs_list": [ 00:11:08.009 { 00:11:08.009 "name": "BaseBdev1", 00:11:08.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.009 "is_configured": false, 00:11:08.009 "data_offset": 0, 00:11:08.009 "data_size": 0 00:11:08.009 }, 00:11:08.009 { 00:11:08.009 "name": "BaseBdev2", 00:11:08.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.009 "is_configured": false, 00:11:08.009 "data_offset": 0, 00:11:08.009 "data_size": 0 00:11:08.009 }, 00:11:08.009 { 00:11:08.009 "name": "BaseBdev3", 00:11:08.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.009 "is_configured": false, 00:11:08.009 "data_offset": 0, 00:11:08.009 "data_size": 0 00:11:08.009 }, 00:11:08.009 { 00:11:08.009 "name": "BaseBdev4", 00:11:08.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.010 "is_configured": false, 00:11:08.010 "data_offset": 0, 00:11:08.010 "data_size": 0 00:11:08.010 } 00:11:08.010 ] 00:11:08.010 }' 00:11:08.010 10:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.010 10:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.578 10:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:08.578 10:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.578 10:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.578 [2024-11-15 10:39:29.464154] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:08.578 [2024-11-15 10:39:29.464375] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:08.578 10:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.578 10:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:08.578 10:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.578 10:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.578 [2024-11-15 10:39:29.472123] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:08.578 [2024-11-15 10:39:29.472316] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:08.578 [2024-11-15 10:39:29.472445] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:08.578 [2024-11-15 10:39:29.472528] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:08.578 [2024-11-15 10:39:29.472720] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:08.578 [2024-11-15 10:39:29.472828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:08.578 [2024-11-15 10:39:29.473051] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:08.578 [2024-11-15 10:39:29.473126] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:08.578 10:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.578 10:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:08.578 10:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.578 10:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.578 [2024-11-15 10:39:29.518732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:08.578 BaseBdev1 00:11:08.578 10:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.578 10:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:08.578 10:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:08.578 10:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:08.578 10:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:08.579 10:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:08.579 10:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:08.579 10:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:08.579 10:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.579 10:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.579 10:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.579 10:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:08.579 10:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.579 10:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.579 [ 00:11:08.579 { 00:11:08.579 "name": "BaseBdev1", 00:11:08.579 "aliases": [ 00:11:08.579 "979d282a-2dce-48e7-9f68-9ecadfc67436" 00:11:08.579 ], 00:11:08.579 "product_name": "Malloc disk", 00:11:08.579 "block_size": 512, 00:11:08.579 "num_blocks": 65536, 00:11:08.579 "uuid": "979d282a-2dce-48e7-9f68-9ecadfc67436", 00:11:08.579 "assigned_rate_limits": { 00:11:08.579 "rw_ios_per_sec": 0, 00:11:08.579 "rw_mbytes_per_sec": 0, 00:11:08.579 "r_mbytes_per_sec": 0, 00:11:08.579 "w_mbytes_per_sec": 0 00:11:08.579 }, 00:11:08.579 "claimed": true, 00:11:08.579 "claim_type": "exclusive_write", 00:11:08.579 "zoned": false, 00:11:08.579 "supported_io_types": { 00:11:08.579 "read": true, 00:11:08.579 "write": true, 00:11:08.579 "unmap": true, 00:11:08.579 "flush": true, 00:11:08.579 "reset": true, 00:11:08.579 "nvme_admin": false, 00:11:08.579 "nvme_io": false, 00:11:08.579 "nvme_io_md": false, 00:11:08.579 "write_zeroes": true, 00:11:08.579 "zcopy": true, 00:11:08.579 "get_zone_info": false, 00:11:08.579 "zone_management": false, 00:11:08.579 "zone_append": false, 00:11:08.579 "compare": false, 00:11:08.579 "compare_and_write": false, 00:11:08.579 "abort": true, 00:11:08.579 "seek_hole": false, 00:11:08.579 "seek_data": false, 00:11:08.579 "copy": true, 00:11:08.579 "nvme_iov_md": false 00:11:08.579 }, 00:11:08.579 "memory_domains": [ 00:11:08.579 { 00:11:08.579 "dma_device_id": "system", 00:11:08.579 "dma_device_type": 1 00:11:08.579 }, 00:11:08.579 { 00:11:08.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.579 "dma_device_type": 2 00:11:08.579 } 00:11:08.579 ], 00:11:08.579 "driver_specific": {} 00:11:08.579 } 00:11:08.579 ] 00:11:08.579 10:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:08.579 10:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:08.579 10:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:08.579 10:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.579 10:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.579 10:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:08.579 10:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.579 10:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.579 10:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.579 10:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.579 10:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.579 10:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.579 10:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.579 10:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.579 10:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.579 10:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.579 10:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.579 10:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.579 "name": "Existed_Raid", 
00:11:08.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.579 "strip_size_kb": 64, 00:11:08.579 "state": "configuring", 00:11:08.579 "raid_level": "concat", 00:11:08.579 "superblock": false, 00:11:08.579 "num_base_bdevs": 4, 00:11:08.579 "num_base_bdevs_discovered": 1, 00:11:08.579 "num_base_bdevs_operational": 4, 00:11:08.579 "base_bdevs_list": [ 00:11:08.579 { 00:11:08.579 "name": "BaseBdev1", 00:11:08.579 "uuid": "979d282a-2dce-48e7-9f68-9ecadfc67436", 00:11:08.579 "is_configured": true, 00:11:08.579 "data_offset": 0, 00:11:08.579 "data_size": 65536 00:11:08.579 }, 00:11:08.579 { 00:11:08.579 "name": "BaseBdev2", 00:11:08.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.579 "is_configured": false, 00:11:08.579 "data_offset": 0, 00:11:08.579 "data_size": 0 00:11:08.579 }, 00:11:08.579 { 00:11:08.579 "name": "BaseBdev3", 00:11:08.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.579 "is_configured": false, 00:11:08.579 "data_offset": 0, 00:11:08.579 "data_size": 0 00:11:08.579 }, 00:11:08.579 { 00:11:08.579 "name": "BaseBdev4", 00:11:08.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.579 "is_configured": false, 00:11:08.579 "data_offset": 0, 00:11:08.579 "data_size": 0 00:11:08.579 } 00:11:08.579 ] 00:11:08.579 }' 00:11:08.579 10:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.579 10:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.146 10:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:09.146 10:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.146 10:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.146 [2024-11-15 10:39:30.098932] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:09.146 [2024-11-15 10:39:30.098998] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:09.146 10:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.146 10:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:09.146 10:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.146 10:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.146 [2024-11-15 10:39:30.106973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:09.146 [2024-11-15 10:39:30.109408] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:09.146 [2024-11-15 10:39:30.109459] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:09.146 [2024-11-15 10:39:30.109476] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:09.146 [2024-11-15 10:39:30.109508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:09.146 [2024-11-15 10:39:30.109522] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:09.146 [2024-11-15 10:39:30.109538] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:09.146 10:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.146 10:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:09.146 10:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:09.146 10:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
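At this point the test has deleted the empty array and re-declared it, and will now attach base bdevs one at a time, re-reading the array with `bdev_raid_get_bdevs` after each step and checking `num_base_bdevs_discovered` against the per-bdev flags. A minimal sketch of that verification step in Python — the JSON literal below is abbreviated from the RPC output captured in the log above (only the fields the `verify_raid_bdev_state` helper inspects are kept), and `check_state` is a hypothetical stand-in for the shell helper, not SPDK code:

```python
import json

# Abbreviated form of the bdev_raid_get_bdevs output shown in the log above,
# at the point where only BaseBdev1 has been created and claimed.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "concat",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 4,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true},
    {"name": "BaseBdev2", "is_configured": false},
    {"name": "BaseBdev3", "is_configured": false},
    {"name": "BaseBdev4", "is_configured": false}
  ]
}
""")

def check_state(info, expected_state, raid_level, strip_size, num_operational):
    # Mirrors the checks the shell helper applies to the RPC output.
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    # The discovered count must agree with the per-bdev configured flags.
    configured = sum(b["is_configured"] for b in info["base_bdevs_list"])
    assert configured == info["num_base_bdevs_discovered"]

check_state(raid_bdev_info, "configuring", "concat", 64, 4)
```

In the transcript, the same loop repeats for BaseBdev2 through BaseBdev4, with `num_base_bdevs_discovered` incrementing each time a malloc bdev is created and claimed.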
00:11:09.146 10:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.146 10:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.146 10:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:09.146 10:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.146 10:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.146 10:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.146 10:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.146 10:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.146 10:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.146 10:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.146 10:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.146 10:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.146 10:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.146 10:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.146 10:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.146 "name": "Existed_Raid", 00:11:09.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.146 "strip_size_kb": 64, 00:11:09.146 "state": "configuring", 00:11:09.146 "raid_level": "concat", 00:11:09.146 "superblock": false, 00:11:09.146 "num_base_bdevs": 4, 00:11:09.146 
"num_base_bdevs_discovered": 1, 00:11:09.146 "num_base_bdevs_operational": 4, 00:11:09.146 "base_bdevs_list": [ 00:11:09.146 { 00:11:09.146 "name": "BaseBdev1", 00:11:09.146 "uuid": "979d282a-2dce-48e7-9f68-9ecadfc67436", 00:11:09.146 "is_configured": true, 00:11:09.146 "data_offset": 0, 00:11:09.146 "data_size": 65536 00:11:09.146 }, 00:11:09.146 { 00:11:09.146 "name": "BaseBdev2", 00:11:09.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.146 "is_configured": false, 00:11:09.146 "data_offset": 0, 00:11:09.146 "data_size": 0 00:11:09.146 }, 00:11:09.146 { 00:11:09.146 "name": "BaseBdev3", 00:11:09.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.146 "is_configured": false, 00:11:09.146 "data_offset": 0, 00:11:09.146 "data_size": 0 00:11:09.146 }, 00:11:09.146 { 00:11:09.146 "name": "BaseBdev4", 00:11:09.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.146 "is_configured": false, 00:11:09.146 "data_offset": 0, 00:11:09.146 "data_size": 0 00:11:09.146 } 00:11:09.146 ] 00:11:09.146 }' 00:11:09.146 10:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.146 10:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.713 10:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:09.713 10:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.714 10:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.714 [2024-11-15 10:39:30.665829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:09.714 BaseBdev2 00:11:09.714 10:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.714 10:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:09.714 10:39:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:09.714 10:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:09.714 10:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:09.714 10:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:09.714 10:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:09.714 10:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:09.714 10:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.714 10:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.714 10:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.714 10:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:09.714 10:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.714 10:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.714 [ 00:11:09.714 { 00:11:09.714 "name": "BaseBdev2", 00:11:09.714 "aliases": [ 00:11:09.714 "01df9aad-bb60-4741-b482-fc476d73b4b0" 00:11:09.714 ], 00:11:09.714 "product_name": "Malloc disk", 00:11:09.714 "block_size": 512, 00:11:09.714 "num_blocks": 65536, 00:11:09.714 "uuid": "01df9aad-bb60-4741-b482-fc476d73b4b0", 00:11:09.714 "assigned_rate_limits": { 00:11:09.714 "rw_ios_per_sec": 0, 00:11:09.714 "rw_mbytes_per_sec": 0, 00:11:09.714 "r_mbytes_per_sec": 0, 00:11:09.714 "w_mbytes_per_sec": 0 00:11:09.714 }, 00:11:09.714 "claimed": true, 00:11:09.714 "claim_type": "exclusive_write", 00:11:09.714 "zoned": false, 00:11:09.714 "supported_io_types": { 
00:11:09.714 "read": true, 00:11:09.714 "write": true, 00:11:09.714 "unmap": true, 00:11:09.714 "flush": true, 00:11:09.714 "reset": true, 00:11:09.714 "nvme_admin": false, 00:11:09.714 "nvme_io": false, 00:11:09.714 "nvme_io_md": false, 00:11:09.714 "write_zeroes": true, 00:11:09.714 "zcopy": true, 00:11:09.714 "get_zone_info": false, 00:11:09.714 "zone_management": false, 00:11:09.714 "zone_append": false, 00:11:09.714 "compare": false, 00:11:09.714 "compare_and_write": false, 00:11:09.714 "abort": true, 00:11:09.714 "seek_hole": false, 00:11:09.714 "seek_data": false, 00:11:09.714 "copy": true, 00:11:09.714 "nvme_iov_md": false 00:11:09.714 }, 00:11:09.714 "memory_domains": [ 00:11:09.714 { 00:11:09.714 "dma_device_id": "system", 00:11:09.714 "dma_device_type": 1 00:11:09.714 }, 00:11:09.714 { 00:11:09.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.714 "dma_device_type": 2 00:11:09.714 } 00:11:09.714 ], 00:11:09.714 "driver_specific": {} 00:11:09.714 } 00:11:09.714 ] 00:11:09.714 10:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.714 10:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:09.714 10:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:09.714 10:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:09.714 10:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:09.714 10:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.714 10:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.714 10:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:09.714 10:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:11:09.714 10:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.714 10:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.714 10:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.714 10:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.714 10:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.714 10:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.714 10:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.714 10:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.714 10:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.714 10:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.714 10:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.714 "name": "Existed_Raid", 00:11:09.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.714 "strip_size_kb": 64, 00:11:09.714 "state": "configuring", 00:11:09.714 "raid_level": "concat", 00:11:09.714 "superblock": false, 00:11:09.714 "num_base_bdevs": 4, 00:11:09.714 "num_base_bdevs_discovered": 2, 00:11:09.714 "num_base_bdevs_operational": 4, 00:11:09.714 "base_bdevs_list": [ 00:11:09.714 { 00:11:09.714 "name": "BaseBdev1", 00:11:09.714 "uuid": "979d282a-2dce-48e7-9f68-9ecadfc67436", 00:11:09.714 "is_configured": true, 00:11:09.714 "data_offset": 0, 00:11:09.714 "data_size": 65536 00:11:09.714 }, 00:11:09.714 { 00:11:09.714 "name": "BaseBdev2", 00:11:09.714 "uuid": "01df9aad-bb60-4741-b482-fc476d73b4b0", 00:11:09.714 
"is_configured": true, 00:11:09.714 "data_offset": 0, 00:11:09.714 "data_size": 65536 00:11:09.714 }, 00:11:09.714 { 00:11:09.714 "name": "BaseBdev3", 00:11:09.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.714 "is_configured": false, 00:11:09.714 "data_offset": 0, 00:11:09.714 "data_size": 0 00:11:09.714 }, 00:11:09.714 { 00:11:09.714 "name": "BaseBdev4", 00:11:09.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.714 "is_configured": false, 00:11:09.714 "data_offset": 0, 00:11:09.714 "data_size": 0 00:11:09.714 } 00:11:09.714 ] 00:11:09.714 }' 00:11:09.714 10:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.714 10:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.280 10:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:10.280 10:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.280 10:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.280 [2024-11-15 10:39:31.246868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:10.280 BaseBdev3 00:11:10.280 10:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.280 10:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:10.280 10:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:10.280 10:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:10.280 10:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:10.280 10:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:10.280 10:39:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:10.280 10:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:10.280 10:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.280 10:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.281 10:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.281 10:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:10.281 10:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.281 10:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.281 [ 00:11:10.281 { 00:11:10.281 "name": "BaseBdev3", 00:11:10.281 "aliases": [ 00:11:10.281 "b676a97d-80e6-44c0-b4b9-3d11101b1117" 00:11:10.281 ], 00:11:10.281 "product_name": "Malloc disk", 00:11:10.281 "block_size": 512, 00:11:10.281 "num_blocks": 65536, 00:11:10.281 "uuid": "b676a97d-80e6-44c0-b4b9-3d11101b1117", 00:11:10.281 "assigned_rate_limits": { 00:11:10.281 "rw_ios_per_sec": 0, 00:11:10.281 "rw_mbytes_per_sec": 0, 00:11:10.281 "r_mbytes_per_sec": 0, 00:11:10.281 "w_mbytes_per_sec": 0 00:11:10.281 }, 00:11:10.281 "claimed": true, 00:11:10.281 "claim_type": "exclusive_write", 00:11:10.281 "zoned": false, 00:11:10.281 "supported_io_types": { 00:11:10.281 "read": true, 00:11:10.281 "write": true, 00:11:10.281 "unmap": true, 00:11:10.281 "flush": true, 00:11:10.281 "reset": true, 00:11:10.281 "nvme_admin": false, 00:11:10.281 "nvme_io": false, 00:11:10.281 "nvme_io_md": false, 00:11:10.281 "write_zeroes": true, 00:11:10.281 "zcopy": true, 00:11:10.281 "get_zone_info": false, 00:11:10.281 "zone_management": false, 00:11:10.281 "zone_append": false, 00:11:10.281 "compare": false, 00:11:10.281 "compare_and_write": false, 
00:11:10.281 "abort": true, 00:11:10.281 "seek_hole": false, 00:11:10.281 "seek_data": false, 00:11:10.281 "copy": true, 00:11:10.281 "nvme_iov_md": false 00:11:10.281 }, 00:11:10.281 "memory_domains": [ 00:11:10.281 { 00:11:10.281 "dma_device_id": "system", 00:11:10.281 "dma_device_type": 1 00:11:10.281 }, 00:11:10.281 { 00:11:10.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.281 "dma_device_type": 2 00:11:10.281 } 00:11:10.281 ], 00:11:10.281 "driver_specific": {} 00:11:10.281 } 00:11:10.281 ] 00:11:10.281 10:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.281 10:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:10.281 10:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:10.281 10:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:10.281 10:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:10.281 10:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.281 10:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.281 10:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:10.281 10:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.281 10:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.281 10:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.281 10:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.281 10:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
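With BaseBdev3 now claimed, three of the four declared base bdevs are discovered and the array remains in `configuring`; the log below shows that once BaseBdev4 is added the raid configures and registers its io device. A simplified model of that progression — an illustrative sketch only, not SPDK's actual state machine (which has additional states such as `offline`):

```python
# Simplified model of the state progression this test exercises: a concat
# array declared over four base bdevs stays "configuring" until every base
# bdev has been discovered, at which point configuration completes.
def raid_state(num_base_bdevs, num_discovered):
    assert 0 <= num_discovered <= num_base_bdevs
    return "online" if num_discovered == num_base_bdevs else "configuring"

# Discovered counts 0..3 leave the array configuring; the fourth completes it.
states = [raid_state(4, n) for n in range(5)]
```

Under this model, `states` is four `"configuring"` entries followed by one `"online"`, matching the transitions visible in the surrounding log.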
00:11:10.281 10:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.281 10:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.281 10:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.281 10:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.281 10:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.281 10:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.281 10:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.281 "name": "Existed_Raid", 00:11:10.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.281 "strip_size_kb": 64, 00:11:10.281 "state": "configuring", 00:11:10.281 "raid_level": "concat", 00:11:10.281 "superblock": false, 00:11:10.281 "num_base_bdevs": 4, 00:11:10.281 "num_base_bdevs_discovered": 3, 00:11:10.281 "num_base_bdevs_operational": 4, 00:11:10.281 "base_bdevs_list": [ 00:11:10.281 { 00:11:10.281 "name": "BaseBdev1", 00:11:10.281 "uuid": "979d282a-2dce-48e7-9f68-9ecadfc67436", 00:11:10.281 "is_configured": true, 00:11:10.281 "data_offset": 0, 00:11:10.281 "data_size": 65536 00:11:10.281 }, 00:11:10.281 { 00:11:10.281 "name": "BaseBdev2", 00:11:10.281 "uuid": "01df9aad-bb60-4741-b482-fc476d73b4b0", 00:11:10.281 "is_configured": true, 00:11:10.281 "data_offset": 0, 00:11:10.281 "data_size": 65536 00:11:10.281 }, 00:11:10.281 { 00:11:10.281 "name": "BaseBdev3", 00:11:10.281 "uuid": "b676a97d-80e6-44c0-b4b9-3d11101b1117", 00:11:10.281 "is_configured": true, 00:11:10.281 "data_offset": 0, 00:11:10.281 "data_size": 65536 00:11:10.281 }, 00:11:10.281 { 00:11:10.281 "name": "BaseBdev4", 00:11:10.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.281 "is_configured": false, 
00:11:10.281 "data_offset": 0, 00:11:10.281 "data_size": 0 00:11:10.281 } 00:11:10.281 ] 00:11:10.281 }' 00:11:10.281 10:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.281 10:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.847 10:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:10.847 10:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.848 10:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.848 [2024-11-15 10:39:31.856453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:10.848 [2024-11-15 10:39:31.856712] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:10.848 [2024-11-15 10:39:31.856755] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:10.848 [2024-11-15 10:39:31.857157] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:10.848 [2024-11-15 10:39:31.857379] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:10.848 [2024-11-15 10:39:31.857403] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:10.848 [2024-11-15 10:39:31.857739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:10.848 BaseBdev4 00:11:10.848 10:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.848 10:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:10.848 10:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:10.848 10:39:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:10.848 10:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:10.848 10:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:10.848 10:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:10.848 10:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:10.848 10:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.848 10:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.848 10:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.848 10:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:10.848 10:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.848 10:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.848 [ 00:11:10.848 { 00:11:10.848 "name": "BaseBdev4", 00:11:10.848 "aliases": [ 00:11:10.848 "f0dc0263-95a3-4ab4-a96a-26ae34daa5db" 00:11:10.848 ], 00:11:10.848 "product_name": "Malloc disk", 00:11:10.848 "block_size": 512, 00:11:10.848 "num_blocks": 65536, 00:11:10.848 "uuid": "f0dc0263-95a3-4ab4-a96a-26ae34daa5db", 00:11:10.848 "assigned_rate_limits": { 00:11:10.848 "rw_ios_per_sec": 0, 00:11:10.848 "rw_mbytes_per_sec": 0, 00:11:10.848 "r_mbytes_per_sec": 0, 00:11:10.848 "w_mbytes_per_sec": 0 00:11:10.848 }, 00:11:10.848 "claimed": true, 00:11:10.848 "claim_type": "exclusive_write", 00:11:10.848 "zoned": false, 00:11:10.848 "supported_io_types": { 00:11:10.848 "read": true, 00:11:10.848 "write": true, 00:11:10.848 "unmap": true, 00:11:10.848 "flush": true, 00:11:10.848 "reset": true, 00:11:10.848 
"nvme_admin": false, 00:11:10.848 "nvme_io": false, 00:11:10.848 "nvme_io_md": false, 00:11:10.848 "write_zeroes": true, 00:11:10.848 "zcopy": true, 00:11:10.848 "get_zone_info": false, 00:11:10.848 "zone_management": false, 00:11:10.848 "zone_append": false, 00:11:10.848 "compare": false, 00:11:10.848 "compare_and_write": false, 00:11:10.848 "abort": true, 00:11:10.848 "seek_hole": false, 00:11:10.848 "seek_data": false, 00:11:10.848 "copy": true, 00:11:10.848 "nvme_iov_md": false 00:11:10.848 }, 00:11:10.848 "memory_domains": [ 00:11:10.848 { 00:11:10.848 "dma_device_id": "system", 00:11:10.848 "dma_device_type": 1 00:11:10.848 }, 00:11:10.848 { 00:11:10.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.848 "dma_device_type": 2 00:11:10.848 } 00:11:10.848 ], 00:11:10.848 "driver_specific": {} 00:11:10.848 } 00:11:10.848 ] 00:11:10.848 10:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.848 10:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:10.848 10:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:10.848 10:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:10.848 10:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:10.848 10:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.848 10:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:10.848 10:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:10.848 10:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.848 10:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.848 
10:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.848 10:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.848 10:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.848 10:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.848 10:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.848 10:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.848 10:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.848 10:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.848 10:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.848 10:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.848 "name": "Existed_Raid", 00:11:10.848 "uuid": "8b5dc78f-fb57-4556-b127-a31a81d650f2", 00:11:10.848 "strip_size_kb": 64, 00:11:10.848 "state": "online", 00:11:10.848 "raid_level": "concat", 00:11:10.848 "superblock": false, 00:11:10.848 "num_base_bdevs": 4, 00:11:10.848 "num_base_bdevs_discovered": 4, 00:11:10.848 "num_base_bdevs_operational": 4, 00:11:10.848 "base_bdevs_list": [ 00:11:10.848 { 00:11:10.848 "name": "BaseBdev1", 00:11:10.848 "uuid": "979d282a-2dce-48e7-9f68-9ecadfc67436", 00:11:10.848 "is_configured": true, 00:11:10.848 "data_offset": 0, 00:11:10.848 "data_size": 65536 00:11:10.848 }, 00:11:10.848 { 00:11:10.848 "name": "BaseBdev2", 00:11:10.848 "uuid": "01df9aad-bb60-4741-b482-fc476d73b4b0", 00:11:10.848 "is_configured": true, 00:11:10.848 "data_offset": 0, 00:11:10.848 "data_size": 65536 00:11:10.848 }, 00:11:10.848 { 00:11:10.848 "name": "BaseBdev3", 
00:11:10.848 "uuid": "b676a97d-80e6-44c0-b4b9-3d11101b1117", 00:11:10.848 "is_configured": true, 00:11:10.848 "data_offset": 0, 00:11:10.848 "data_size": 65536 00:11:10.848 }, 00:11:10.848 { 00:11:10.848 "name": "BaseBdev4", 00:11:10.848 "uuid": "f0dc0263-95a3-4ab4-a96a-26ae34daa5db", 00:11:10.848 "is_configured": true, 00:11:10.848 "data_offset": 0, 00:11:10.848 "data_size": 65536 00:11:10.848 } 00:11:10.848 ] 00:11:10.848 }' 00:11:10.848 10:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.848 10:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.427 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:11.427 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:11.427 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:11.427 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:11.427 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:11.427 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:11.427 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:11.427 10:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.427 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:11.427 10:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.427 [2024-11-15 10:39:32.389187] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:11.427 10:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.427 
10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:11.427 "name": "Existed_Raid", 00:11:11.427 "aliases": [ 00:11:11.427 "8b5dc78f-fb57-4556-b127-a31a81d650f2" 00:11:11.427 ], 00:11:11.427 "product_name": "Raid Volume", 00:11:11.427 "block_size": 512, 00:11:11.427 "num_blocks": 262144, 00:11:11.427 "uuid": "8b5dc78f-fb57-4556-b127-a31a81d650f2", 00:11:11.427 "assigned_rate_limits": { 00:11:11.427 "rw_ios_per_sec": 0, 00:11:11.427 "rw_mbytes_per_sec": 0, 00:11:11.427 "r_mbytes_per_sec": 0, 00:11:11.427 "w_mbytes_per_sec": 0 00:11:11.427 }, 00:11:11.427 "claimed": false, 00:11:11.427 "zoned": false, 00:11:11.427 "supported_io_types": { 00:11:11.427 "read": true, 00:11:11.427 "write": true, 00:11:11.427 "unmap": true, 00:11:11.427 "flush": true, 00:11:11.427 "reset": true, 00:11:11.427 "nvme_admin": false, 00:11:11.427 "nvme_io": false, 00:11:11.427 "nvme_io_md": false, 00:11:11.427 "write_zeroes": true, 00:11:11.427 "zcopy": false, 00:11:11.427 "get_zone_info": false, 00:11:11.427 "zone_management": false, 00:11:11.427 "zone_append": false, 00:11:11.427 "compare": false, 00:11:11.427 "compare_and_write": false, 00:11:11.427 "abort": false, 00:11:11.427 "seek_hole": false, 00:11:11.427 "seek_data": false, 00:11:11.427 "copy": false, 00:11:11.427 "nvme_iov_md": false 00:11:11.427 }, 00:11:11.427 "memory_domains": [ 00:11:11.427 { 00:11:11.427 "dma_device_id": "system", 00:11:11.427 "dma_device_type": 1 00:11:11.427 }, 00:11:11.427 { 00:11:11.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.427 "dma_device_type": 2 00:11:11.427 }, 00:11:11.427 { 00:11:11.427 "dma_device_id": "system", 00:11:11.427 "dma_device_type": 1 00:11:11.427 }, 00:11:11.427 { 00:11:11.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.427 "dma_device_type": 2 00:11:11.427 }, 00:11:11.427 { 00:11:11.427 "dma_device_id": "system", 00:11:11.427 "dma_device_type": 1 00:11:11.427 }, 00:11:11.427 { 00:11:11.427 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:11.427 "dma_device_type": 2 00:11:11.427 }, 00:11:11.427 { 00:11:11.427 "dma_device_id": "system", 00:11:11.427 "dma_device_type": 1 00:11:11.427 }, 00:11:11.427 { 00:11:11.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.427 "dma_device_type": 2 00:11:11.427 } 00:11:11.427 ], 00:11:11.427 "driver_specific": { 00:11:11.427 "raid": { 00:11:11.427 "uuid": "8b5dc78f-fb57-4556-b127-a31a81d650f2", 00:11:11.427 "strip_size_kb": 64, 00:11:11.427 "state": "online", 00:11:11.427 "raid_level": "concat", 00:11:11.427 "superblock": false, 00:11:11.427 "num_base_bdevs": 4, 00:11:11.427 "num_base_bdevs_discovered": 4, 00:11:11.427 "num_base_bdevs_operational": 4, 00:11:11.427 "base_bdevs_list": [ 00:11:11.427 { 00:11:11.427 "name": "BaseBdev1", 00:11:11.427 "uuid": "979d282a-2dce-48e7-9f68-9ecadfc67436", 00:11:11.427 "is_configured": true, 00:11:11.427 "data_offset": 0, 00:11:11.427 "data_size": 65536 00:11:11.427 }, 00:11:11.427 { 00:11:11.427 "name": "BaseBdev2", 00:11:11.427 "uuid": "01df9aad-bb60-4741-b482-fc476d73b4b0", 00:11:11.427 "is_configured": true, 00:11:11.427 "data_offset": 0, 00:11:11.427 "data_size": 65536 00:11:11.427 }, 00:11:11.427 { 00:11:11.427 "name": "BaseBdev3", 00:11:11.427 "uuid": "b676a97d-80e6-44c0-b4b9-3d11101b1117", 00:11:11.427 "is_configured": true, 00:11:11.427 "data_offset": 0, 00:11:11.427 "data_size": 65536 00:11:11.427 }, 00:11:11.427 { 00:11:11.427 "name": "BaseBdev4", 00:11:11.427 "uuid": "f0dc0263-95a3-4ab4-a96a-26ae34daa5db", 00:11:11.427 "is_configured": true, 00:11:11.427 "data_offset": 0, 00:11:11.427 "data_size": 65536 00:11:11.427 } 00:11:11.427 ] 00:11:11.427 } 00:11:11.427 } 00:11:11.427 }' 00:11:11.427 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:11.427 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:11.427 BaseBdev2 
00:11:11.427 BaseBdev3 00:11:11.427 BaseBdev4' 00:11:11.427 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.427 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:11.427 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:11.427 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.427 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:11.427 10:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.427 10:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.427 10:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.686 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:11.686 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:11.686 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:11.686 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:11.686 10:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.686 10:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.687 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.687 10:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.687 10:39:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:11.687 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:11.687 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:11.687 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:11.687 10:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.687 10:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.687 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.687 10:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.687 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:11.687 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:11.687 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:11.687 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:11.687 10:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.687 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.687 10:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.687 10:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.687 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:11.687 10:39:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:11.687 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:11.687 10:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.687 10:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.687 [2024-11-15 10:39:32.744813] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:11.687 [2024-11-15 10:39:32.744853] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:11.687 [2024-11-15 10:39:32.744917] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:11.687 10:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.687 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:11.687 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:11.687 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:11.687 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:11.687 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:11.687 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:11.687 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.687 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:11.687 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:11.687 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:11.687 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:11.687 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.687 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.687 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.687 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.687 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.687 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.687 10:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.687 10:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.944 10:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.944 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.945 "name": "Existed_Raid", 00:11:11.945 "uuid": "8b5dc78f-fb57-4556-b127-a31a81d650f2", 00:11:11.945 "strip_size_kb": 64, 00:11:11.945 "state": "offline", 00:11:11.945 "raid_level": "concat", 00:11:11.945 "superblock": false, 00:11:11.945 "num_base_bdevs": 4, 00:11:11.945 "num_base_bdevs_discovered": 3, 00:11:11.945 "num_base_bdevs_operational": 3, 00:11:11.945 "base_bdevs_list": [ 00:11:11.945 { 00:11:11.945 "name": null, 00:11:11.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.945 "is_configured": false, 00:11:11.945 "data_offset": 0, 00:11:11.945 "data_size": 65536 00:11:11.945 }, 00:11:11.945 { 00:11:11.945 "name": "BaseBdev2", 00:11:11.945 "uuid": "01df9aad-bb60-4741-b482-fc476d73b4b0", 00:11:11.945 "is_configured": 
true, 00:11:11.945 "data_offset": 0, 00:11:11.945 "data_size": 65536 00:11:11.945 }, 00:11:11.945 { 00:11:11.945 "name": "BaseBdev3", 00:11:11.945 "uuid": "b676a97d-80e6-44c0-b4b9-3d11101b1117", 00:11:11.945 "is_configured": true, 00:11:11.945 "data_offset": 0, 00:11:11.945 "data_size": 65536 00:11:11.945 }, 00:11:11.945 { 00:11:11.945 "name": "BaseBdev4", 00:11:11.945 "uuid": "f0dc0263-95a3-4ab4-a96a-26ae34daa5db", 00:11:11.945 "is_configured": true, 00:11:11.945 "data_offset": 0, 00:11:11.945 "data_size": 65536 00:11:11.945 } 00:11:11.945 ] 00:11:11.945 }' 00:11:11.945 10:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.945 10:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.202 10:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:12.202 10:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:12.202 10:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.202 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.202 10:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:12.202 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.460 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.460 10:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:12.460 10:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:12.460 10:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:12.460 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:12.460 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.460 [2024-11-15 10:39:33.406160] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:12.460 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.460 10:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:12.460 10:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:12.460 10:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.460 10:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:12.460 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.460 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.460 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.460 10:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:12.460 10:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:12.460 10:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:12.460 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.460 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.460 [2024-11-15 10:39:33.549640] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:12.718 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.718 10:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:12.718 10:39:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:12.718 10:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.718 10:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:12.718 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.718 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.718 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.718 10:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:12.718 10:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:12.718 10:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:12.718 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.718 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.718 [2024-11-15 10:39:33.696008] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:12.718 [2024-11-15 10:39:33.696082] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:12.718 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.718 10:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:12.718 10:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:12.718 10:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.718 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:12.718 10:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:12.718 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.718 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.718 10:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:12.718 10:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:12.718 10:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:12.718 10:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:12.719 10:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:12.719 10:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:12.719 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.719 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.719 BaseBdev2 00:11:12.719 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.719 10:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:12.719 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:12.719 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:12.719 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:12.719 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:12.719 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:11:12.719 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:12.719 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.719 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.977 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.977 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:12.977 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.977 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.977 [ 00:11:12.977 { 00:11:12.977 "name": "BaseBdev2", 00:11:12.977 "aliases": [ 00:11:12.977 "d3739f3f-412d-46f1-a9fe-588997bed8de" 00:11:12.977 ], 00:11:12.977 "product_name": "Malloc disk", 00:11:12.977 "block_size": 512, 00:11:12.977 "num_blocks": 65536, 00:11:12.977 "uuid": "d3739f3f-412d-46f1-a9fe-588997bed8de", 00:11:12.977 "assigned_rate_limits": { 00:11:12.977 "rw_ios_per_sec": 0, 00:11:12.977 "rw_mbytes_per_sec": 0, 00:11:12.977 "r_mbytes_per_sec": 0, 00:11:12.977 "w_mbytes_per_sec": 0 00:11:12.977 }, 00:11:12.977 "claimed": false, 00:11:12.977 "zoned": false, 00:11:12.977 "supported_io_types": { 00:11:12.977 "read": true, 00:11:12.977 "write": true, 00:11:12.977 "unmap": true, 00:11:12.977 "flush": true, 00:11:12.977 "reset": true, 00:11:12.977 "nvme_admin": false, 00:11:12.977 "nvme_io": false, 00:11:12.977 "nvme_io_md": false, 00:11:12.977 "write_zeroes": true, 00:11:12.977 "zcopy": true, 00:11:12.977 "get_zone_info": false, 00:11:12.977 "zone_management": false, 00:11:12.977 "zone_append": false, 00:11:12.977 "compare": false, 00:11:12.977 "compare_and_write": false, 00:11:12.977 "abort": true, 00:11:12.977 "seek_hole": false, 00:11:12.977 
"seek_data": false, 00:11:12.977 "copy": true, 00:11:12.977 "nvme_iov_md": false 00:11:12.977 }, 00:11:12.977 "memory_domains": [ 00:11:12.977 { 00:11:12.977 "dma_device_id": "system", 00:11:12.977 "dma_device_type": 1 00:11:12.977 }, 00:11:12.977 { 00:11:12.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.977 "dma_device_type": 2 00:11:12.977 } 00:11:12.977 ], 00:11:12.977 "driver_specific": {} 00:11:12.977 } 00:11:12.977 ] 00:11:12.977 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.977 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:12.977 10:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:12.977 10:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:12.977 10:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:12.977 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.977 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.977 BaseBdev3 00:11:12.977 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.977 10:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:12.977 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:12.977 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:12.977 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:12.977 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:12.977 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:11:12.977 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:12.977 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.977 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.977 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.977 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:12.977 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.977 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.977 [ 00:11:12.977 { 00:11:12.977 "name": "BaseBdev3", 00:11:12.977 "aliases": [ 00:11:12.977 "4d450f73-872a-464e-8d7d-64334a5c2ddd" 00:11:12.977 ], 00:11:12.977 "product_name": "Malloc disk", 00:11:12.977 "block_size": 512, 00:11:12.977 "num_blocks": 65536, 00:11:12.977 "uuid": "4d450f73-872a-464e-8d7d-64334a5c2ddd", 00:11:12.977 "assigned_rate_limits": { 00:11:12.977 "rw_ios_per_sec": 0, 00:11:12.977 "rw_mbytes_per_sec": 0, 00:11:12.977 "r_mbytes_per_sec": 0, 00:11:12.977 "w_mbytes_per_sec": 0 00:11:12.977 }, 00:11:12.977 "claimed": false, 00:11:12.977 "zoned": false, 00:11:12.977 "supported_io_types": { 00:11:12.977 "read": true, 00:11:12.977 "write": true, 00:11:12.977 "unmap": true, 00:11:12.977 "flush": true, 00:11:12.977 "reset": true, 00:11:12.977 "nvme_admin": false, 00:11:12.977 "nvme_io": false, 00:11:12.977 "nvme_io_md": false, 00:11:12.977 "write_zeroes": true, 00:11:12.977 "zcopy": true, 00:11:12.977 "get_zone_info": false, 00:11:12.977 "zone_management": false, 00:11:12.977 "zone_append": false, 00:11:12.977 "compare": false, 00:11:12.977 "compare_and_write": false, 00:11:12.977 "abort": true, 00:11:12.977 "seek_hole": false, 00:11:12.977 "seek_data": false, 
00:11:12.977 "copy": true, 00:11:12.977 "nvme_iov_md": false 00:11:12.977 }, 00:11:12.977 "memory_domains": [ 00:11:12.977 { 00:11:12.977 "dma_device_id": "system", 00:11:12.977 "dma_device_type": 1 00:11:12.977 }, 00:11:12.977 { 00:11:12.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.977 "dma_device_type": 2 00:11:12.977 } 00:11:12.977 ], 00:11:12.977 "driver_specific": {} 00:11:12.977 } 00:11:12.977 ] 00:11:12.977 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.977 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:12.977 10:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:12.977 10:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:12.977 10:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:12.977 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.977 10:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.977 BaseBdev4 00:11:12.977 10:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.977 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:12.977 10:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:12.977 10:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:12.977 10:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:12.977 10:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:12.977 10:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:12.977 
10:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:12.977 10:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.977 10:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.977 10:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.977 10:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:12.977 10:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.977 10:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.977 [ 00:11:12.977 { 00:11:12.977 "name": "BaseBdev4", 00:11:12.977 "aliases": [ 00:11:12.977 "e166e0cd-04a2-47ec-a20e-210346389d74" 00:11:12.977 ], 00:11:12.977 "product_name": "Malloc disk", 00:11:12.977 "block_size": 512, 00:11:12.977 "num_blocks": 65536, 00:11:12.977 "uuid": "e166e0cd-04a2-47ec-a20e-210346389d74", 00:11:12.977 "assigned_rate_limits": { 00:11:12.978 "rw_ios_per_sec": 0, 00:11:12.978 "rw_mbytes_per_sec": 0, 00:11:12.978 "r_mbytes_per_sec": 0, 00:11:12.978 "w_mbytes_per_sec": 0 00:11:12.978 }, 00:11:12.978 "claimed": false, 00:11:12.978 "zoned": false, 00:11:12.978 "supported_io_types": { 00:11:12.978 "read": true, 00:11:12.978 "write": true, 00:11:12.978 "unmap": true, 00:11:12.978 "flush": true, 00:11:12.978 "reset": true, 00:11:12.978 "nvme_admin": false, 00:11:12.978 "nvme_io": false, 00:11:12.978 "nvme_io_md": false, 00:11:12.978 "write_zeroes": true, 00:11:12.978 "zcopy": true, 00:11:12.978 "get_zone_info": false, 00:11:12.978 "zone_management": false, 00:11:12.978 "zone_append": false, 00:11:12.978 "compare": false, 00:11:12.978 "compare_and_write": false, 00:11:12.978 "abort": true, 00:11:12.978 "seek_hole": false, 00:11:12.978 "seek_data": false, 00:11:12.978 
"copy": true, 00:11:12.978 "nvme_iov_md": false 00:11:12.978 }, 00:11:12.978 "memory_domains": [ 00:11:12.978 { 00:11:12.978 "dma_device_id": "system", 00:11:12.978 "dma_device_type": 1 00:11:12.978 }, 00:11:12.978 { 00:11:12.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.978 "dma_device_type": 2 00:11:12.978 } 00:11:12.978 ], 00:11:12.978 "driver_specific": {} 00:11:12.978 } 00:11:12.978 ] 00:11:12.978 10:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.978 10:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:12.978 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:12.978 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:12.978 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:12.978 10:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.978 10:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.978 [2024-11-15 10:39:34.065104] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:12.978 [2024-11-15 10:39:34.065279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:12.978 [2024-11-15 10:39:34.065328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:12.978 [2024-11-15 10:39:34.067733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:12.978 [2024-11-15 10:39:34.067808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:12.978 10:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.978 10:39:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:12.978 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.978 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.978 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:12.978 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.978 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.978 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.978 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.978 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.978 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.978 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.978 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.978 10:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.978 10:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.978 10:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.978 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.978 "name": "Existed_Raid", 00:11:12.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.978 "strip_size_kb": 64, 00:11:12.978 "state": "configuring", 00:11:12.978 
"raid_level": "concat", 00:11:12.978 "superblock": false, 00:11:12.978 "num_base_bdevs": 4, 00:11:12.978 "num_base_bdevs_discovered": 3, 00:11:12.978 "num_base_bdevs_operational": 4, 00:11:12.978 "base_bdevs_list": [ 00:11:12.978 { 00:11:12.978 "name": "BaseBdev1", 00:11:12.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.978 "is_configured": false, 00:11:12.978 "data_offset": 0, 00:11:12.978 "data_size": 0 00:11:12.978 }, 00:11:12.978 { 00:11:12.978 "name": "BaseBdev2", 00:11:12.978 "uuid": "d3739f3f-412d-46f1-a9fe-588997bed8de", 00:11:12.978 "is_configured": true, 00:11:12.978 "data_offset": 0, 00:11:12.978 "data_size": 65536 00:11:12.978 }, 00:11:12.978 { 00:11:12.978 "name": "BaseBdev3", 00:11:12.978 "uuid": "4d450f73-872a-464e-8d7d-64334a5c2ddd", 00:11:12.978 "is_configured": true, 00:11:12.978 "data_offset": 0, 00:11:12.978 "data_size": 65536 00:11:12.978 }, 00:11:12.978 { 00:11:12.978 "name": "BaseBdev4", 00:11:12.978 "uuid": "e166e0cd-04a2-47ec-a20e-210346389d74", 00:11:12.978 "is_configured": true, 00:11:12.978 "data_offset": 0, 00:11:12.978 "data_size": 65536 00:11:12.978 } 00:11:12.978 ] 00:11:12.978 }' 00:11:12.978 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.978 10:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.583 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:13.583 10:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.583 10:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.583 [2024-11-15 10:39:34.577360] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:13.583 10:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.583 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:13.583 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.583 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.583 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:13.583 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.583 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.583 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.583 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.583 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.583 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.583 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.583 10:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.583 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.583 10:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.583 10:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.583 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.583 "name": "Existed_Raid", 00:11:13.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.583 "strip_size_kb": 64, 00:11:13.583 "state": "configuring", 00:11:13.583 "raid_level": "concat", 00:11:13.583 "superblock": false, 
00:11:13.583 "num_base_bdevs": 4, 00:11:13.583 "num_base_bdevs_discovered": 2, 00:11:13.583 "num_base_bdevs_operational": 4, 00:11:13.583 "base_bdevs_list": [ 00:11:13.583 { 00:11:13.583 "name": "BaseBdev1", 00:11:13.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.583 "is_configured": false, 00:11:13.583 "data_offset": 0, 00:11:13.583 "data_size": 0 00:11:13.583 }, 00:11:13.583 { 00:11:13.583 "name": null, 00:11:13.583 "uuid": "d3739f3f-412d-46f1-a9fe-588997bed8de", 00:11:13.583 "is_configured": false, 00:11:13.583 "data_offset": 0, 00:11:13.583 "data_size": 65536 00:11:13.583 }, 00:11:13.583 { 00:11:13.583 "name": "BaseBdev3", 00:11:13.583 "uuid": "4d450f73-872a-464e-8d7d-64334a5c2ddd", 00:11:13.583 "is_configured": true, 00:11:13.583 "data_offset": 0, 00:11:13.583 "data_size": 65536 00:11:13.583 }, 00:11:13.583 { 00:11:13.583 "name": "BaseBdev4", 00:11:13.583 "uuid": "e166e0cd-04a2-47ec-a20e-210346389d74", 00:11:13.583 "is_configured": true, 00:11:13.583 "data_offset": 0, 00:11:13.583 "data_size": 65536 00:11:13.583 } 00:11:13.583 ] 00:11:13.583 }' 00:11:13.583 10:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.583 10:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.149 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:14.149 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.149 10:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.149 10:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.149 10:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.149 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:14.149 10:39:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:14.149 10:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.149 10:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.149 [2024-11-15 10:39:35.187713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:14.149 BaseBdev1 00:11:14.149 10:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.149 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:14.149 10:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:14.149 10:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:14.149 10:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:14.149 10:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:14.149 10:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:14.149 10:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:14.149 10:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.149 10:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.149 10:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.149 10:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:14.149 10:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.149 10:39:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:14.149 [ 00:11:14.149 { 00:11:14.149 "name": "BaseBdev1", 00:11:14.149 "aliases": [ 00:11:14.149 "c6d215bd-3b85-4b9d-b894-bab35bbf91da" 00:11:14.149 ], 00:11:14.149 "product_name": "Malloc disk", 00:11:14.149 "block_size": 512, 00:11:14.149 "num_blocks": 65536, 00:11:14.149 "uuid": "c6d215bd-3b85-4b9d-b894-bab35bbf91da", 00:11:14.149 "assigned_rate_limits": { 00:11:14.149 "rw_ios_per_sec": 0, 00:11:14.149 "rw_mbytes_per_sec": 0, 00:11:14.149 "r_mbytes_per_sec": 0, 00:11:14.149 "w_mbytes_per_sec": 0 00:11:14.149 }, 00:11:14.149 "claimed": true, 00:11:14.149 "claim_type": "exclusive_write", 00:11:14.149 "zoned": false, 00:11:14.149 "supported_io_types": { 00:11:14.149 "read": true, 00:11:14.149 "write": true, 00:11:14.149 "unmap": true, 00:11:14.149 "flush": true, 00:11:14.149 "reset": true, 00:11:14.149 "nvme_admin": false, 00:11:14.149 "nvme_io": false, 00:11:14.149 "nvme_io_md": false, 00:11:14.149 "write_zeroes": true, 00:11:14.149 "zcopy": true, 00:11:14.149 "get_zone_info": false, 00:11:14.149 "zone_management": false, 00:11:14.149 "zone_append": false, 00:11:14.149 "compare": false, 00:11:14.149 "compare_and_write": false, 00:11:14.149 "abort": true, 00:11:14.149 "seek_hole": false, 00:11:14.149 "seek_data": false, 00:11:14.149 "copy": true, 00:11:14.149 "nvme_iov_md": false 00:11:14.149 }, 00:11:14.149 "memory_domains": [ 00:11:14.149 { 00:11:14.149 "dma_device_id": "system", 00:11:14.149 "dma_device_type": 1 00:11:14.149 }, 00:11:14.149 { 00:11:14.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.149 "dma_device_type": 2 00:11:14.149 } 00:11:14.149 ], 00:11:14.149 "driver_specific": {} 00:11:14.149 } 00:11:14.149 ] 00:11:14.149 10:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.149 10:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:14.149 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:14.149 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.149 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.149 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:14.149 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.149 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.149 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.149 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.149 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.149 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.149 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.149 10:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.150 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.150 10:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.150 10:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.150 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.150 "name": "Existed_Raid", 00:11:14.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.150 "strip_size_kb": 64, 00:11:14.150 "state": "configuring", 00:11:14.150 "raid_level": "concat", 00:11:14.150 "superblock": false, 
00:11:14.150 "num_base_bdevs": 4, 00:11:14.150 "num_base_bdevs_discovered": 3, 00:11:14.150 "num_base_bdevs_operational": 4, 00:11:14.150 "base_bdevs_list": [ 00:11:14.150 { 00:11:14.150 "name": "BaseBdev1", 00:11:14.150 "uuid": "c6d215bd-3b85-4b9d-b894-bab35bbf91da", 00:11:14.150 "is_configured": true, 00:11:14.150 "data_offset": 0, 00:11:14.150 "data_size": 65536 00:11:14.150 }, 00:11:14.150 { 00:11:14.150 "name": null, 00:11:14.150 "uuid": "d3739f3f-412d-46f1-a9fe-588997bed8de", 00:11:14.150 "is_configured": false, 00:11:14.150 "data_offset": 0, 00:11:14.150 "data_size": 65536 00:11:14.150 }, 00:11:14.150 { 00:11:14.150 "name": "BaseBdev3", 00:11:14.150 "uuid": "4d450f73-872a-464e-8d7d-64334a5c2ddd", 00:11:14.150 "is_configured": true, 00:11:14.150 "data_offset": 0, 00:11:14.150 "data_size": 65536 00:11:14.150 }, 00:11:14.150 { 00:11:14.150 "name": "BaseBdev4", 00:11:14.150 "uuid": "e166e0cd-04a2-47ec-a20e-210346389d74", 00:11:14.150 "is_configured": true, 00:11:14.150 "data_offset": 0, 00:11:14.150 "data_size": 65536 00:11:14.150 } 00:11:14.150 ] 00:11:14.150 }' 00:11:14.150 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.150 10:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.715 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.715 10:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.715 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:14.715 10:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.715 10:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.716 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:14.716 10:39:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:14.716 10:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.716 10:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.716 [2024-11-15 10:39:35.827972] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:14.716 10:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.716 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:14.716 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.716 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.716 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:14.716 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.716 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.716 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.716 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.716 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.716 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.716 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.716 10:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.716 10:39:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.716 10:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.716 10:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.974 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.974 "name": "Existed_Raid", 00:11:14.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.974 "strip_size_kb": 64, 00:11:14.974 "state": "configuring", 00:11:14.974 "raid_level": "concat", 00:11:14.974 "superblock": false, 00:11:14.974 "num_base_bdevs": 4, 00:11:14.974 "num_base_bdevs_discovered": 2, 00:11:14.974 "num_base_bdevs_operational": 4, 00:11:14.974 "base_bdevs_list": [ 00:11:14.974 { 00:11:14.974 "name": "BaseBdev1", 00:11:14.974 "uuid": "c6d215bd-3b85-4b9d-b894-bab35bbf91da", 00:11:14.974 "is_configured": true, 00:11:14.974 "data_offset": 0, 00:11:14.974 "data_size": 65536 00:11:14.974 }, 00:11:14.974 { 00:11:14.974 "name": null, 00:11:14.974 "uuid": "d3739f3f-412d-46f1-a9fe-588997bed8de", 00:11:14.974 "is_configured": false, 00:11:14.974 "data_offset": 0, 00:11:14.974 "data_size": 65536 00:11:14.974 }, 00:11:14.974 { 00:11:14.974 "name": null, 00:11:14.974 "uuid": "4d450f73-872a-464e-8d7d-64334a5c2ddd", 00:11:14.974 "is_configured": false, 00:11:14.974 "data_offset": 0, 00:11:14.974 "data_size": 65536 00:11:14.974 }, 00:11:14.974 { 00:11:14.974 "name": "BaseBdev4", 00:11:14.974 "uuid": "e166e0cd-04a2-47ec-a20e-210346389d74", 00:11:14.974 "is_configured": true, 00:11:14.974 "data_offset": 0, 00:11:14.974 "data_size": 65536 00:11:14.974 } 00:11:14.974 ] 00:11:14.974 }' 00:11:14.974 10:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.974 10:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.233 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:15.233 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.233 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.233 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:15.233 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.491 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:15.491 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:15.491 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.491 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.491 [2024-11-15 10:39:36.420087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:15.491 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.491 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:15.491 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.491 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.491 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:15.491 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.491 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.491 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:11:15.491 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.491 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.491 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.491 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.491 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.491 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.491 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.491 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.491 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.491 "name": "Existed_Raid", 00:11:15.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.492 "strip_size_kb": 64, 00:11:15.492 "state": "configuring", 00:11:15.492 "raid_level": "concat", 00:11:15.492 "superblock": false, 00:11:15.492 "num_base_bdevs": 4, 00:11:15.492 "num_base_bdevs_discovered": 3, 00:11:15.492 "num_base_bdevs_operational": 4, 00:11:15.492 "base_bdevs_list": [ 00:11:15.492 { 00:11:15.492 "name": "BaseBdev1", 00:11:15.492 "uuid": "c6d215bd-3b85-4b9d-b894-bab35bbf91da", 00:11:15.492 "is_configured": true, 00:11:15.492 "data_offset": 0, 00:11:15.492 "data_size": 65536 00:11:15.492 }, 00:11:15.492 { 00:11:15.492 "name": null, 00:11:15.492 "uuid": "d3739f3f-412d-46f1-a9fe-588997bed8de", 00:11:15.492 "is_configured": false, 00:11:15.492 "data_offset": 0, 00:11:15.492 "data_size": 65536 00:11:15.492 }, 00:11:15.492 { 00:11:15.492 "name": "BaseBdev3", 00:11:15.492 "uuid": "4d450f73-872a-464e-8d7d-64334a5c2ddd", 00:11:15.492 
"is_configured": true, 00:11:15.492 "data_offset": 0, 00:11:15.492 "data_size": 65536 00:11:15.492 }, 00:11:15.492 { 00:11:15.492 "name": "BaseBdev4", 00:11:15.492 "uuid": "e166e0cd-04a2-47ec-a20e-210346389d74", 00:11:15.492 "is_configured": true, 00:11:15.492 "data_offset": 0, 00:11:15.492 "data_size": 65536 00:11:15.492 } 00:11:15.492 ] 00:11:15.492 }' 00:11:15.492 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.492 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.059 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.059 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.059 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:16.059 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.059 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.059 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:16.059 10:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:16.059 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.059 10:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.059 [2024-11-15 10:39:36.992282] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:16.059 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.059 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:16.059 10:39:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.059 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.059 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:16.059 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.059 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.059 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.059 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.059 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.059 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.059 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.059 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.059 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.059 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.059 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.059 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.059 "name": "Existed_Raid", 00:11:16.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.059 "strip_size_kb": 64, 00:11:16.059 "state": "configuring", 00:11:16.059 "raid_level": "concat", 00:11:16.059 "superblock": false, 00:11:16.059 "num_base_bdevs": 4, 00:11:16.059 "num_base_bdevs_discovered": 2, 00:11:16.059 "num_base_bdevs_operational": 4, 
00:11:16.059 "base_bdevs_list": [ 00:11:16.059 { 00:11:16.059 "name": null, 00:11:16.059 "uuid": "c6d215bd-3b85-4b9d-b894-bab35bbf91da", 00:11:16.059 "is_configured": false, 00:11:16.059 "data_offset": 0, 00:11:16.059 "data_size": 65536 00:11:16.059 }, 00:11:16.059 { 00:11:16.059 "name": null, 00:11:16.059 "uuid": "d3739f3f-412d-46f1-a9fe-588997bed8de", 00:11:16.059 "is_configured": false, 00:11:16.059 "data_offset": 0, 00:11:16.059 "data_size": 65536 00:11:16.059 }, 00:11:16.059 { 00:11:16.059 "name": "BaseBdev3", 00:11:16.059 "uuid": "4d450f73-872a-464e-8d7d-64334a5c2ddd", 00:11:16.059 "is_configured": true, 00:11:16.059 "data_offset": 0, 00:11:16.059 "data_size": 65536 00:11:16.059 }, 00:11:16.059 { 00:11:16.059 "name": "BaseBdev4", 00:11:16.059 "uuid": "e166e0cd-04a2-47ec-a20e-210346389d74", 00:11:16.059 "is_configured": true, 00:11:16.059 "data_offset": 0, 00:11:16.059 "data_size": 65536 00:11:16.059 } 00:11:16.059 ] 00:11:16.059 }' 00:11:16.059 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.059 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.626 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:16.626 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.626 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.626 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.626 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.626 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:16.626 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:16.626 10:39:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.626 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.626 [2024-11-15 10:39:37.636724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:16.626 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.626 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:16.626 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.626 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.626 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:16.626 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.626 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.626 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.626 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.626 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.626 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.626 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.626 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.626 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.626 10:39:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.626 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.626 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.626 "name": "Existed_Raid", 00:11:16.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.626 "strip_size_kb": 64, 00:11:16.626 "state": "configuring", 00:11:16.626 "raid_level": "concat", 00:11:16.626 "superblock": false, 00:11:16.626 "num_base_bdevs": 4, 00:11:16.626 "num_base_bdevs_discovered": 3, 00:11:16.626 "num_base_bdevs_operational": 4, 00:11:16.626 "base_bdevs_list": [ 00:11:16.626 { 00:11:16.626 "name": null, 00:11:16.626 "uuid": "c6d215bd-3b85-4b9d-b894-bab35bbf91da", 00:11:16.626 "is_configured": false, 00:11:16.626 "data_offset": 0, 00:11:16.626 "data_size": 65536 00:11:16.626 }, 00:11:16.626 { 00:11:16.626 "name": "BaseBdev2", 00:11:16.626 "uuid": "d3739f3f-412d-46f1-a9fe-588997bed8de", 00:11:16.626 "is_configured": true, 00:11:16.626 "data_offset": 0, 00:11:16.626 "data_size": 65536 00:11:16.626 }, 00:11:16.626 { 00:11:16.626 "name": "BaseBdev3", 00:11:16.626 "uuid": "4d450f73-872a-464e-8d7d-64334a5c2ddd", 00:11:16.626 "is_configured": true, 00:11:16.626 "data_offset": 0, 00:11:16.626 "data_size": 65536 00:11:16.626 }, 00:11:16.626 { 00:11:16.626 "name": "BaseBdev4", 00:11:16.626 "uuid": "e166e0cd-04a2-47ec-a20e-210346389d74", 00:11:16.626 "is_configured": true, 00:11:16.626 "data_offset": 0, 00:11:16.626 "data_size": 65536 00:11:16.626 } 00:11:16.626 ] 00:11:16.626 }' 00:11:16.626 10:39:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.626 10:39:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.193 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.193 10:39:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.193 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.193 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:17.193 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.193 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:17.193 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.193 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:17.193 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.193 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.193 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.193 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c6d215bd-3b85-4b9d-b894-bab35bbf91da 00:11:17.193 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.193 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.193 [2024-11-15 10:39:38.299625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:17.193 [2024-11-15 10:39:38.299702] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:17.193 [2024-11-15 10:39:38.299715] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:17.193 [2024-11-15 10:39:38.300085] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:17.193 [2024-11-15 10:39:38.300269] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:17.193 [2024-11-15 10:39:38.300290] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:17.193 [2024-11-15 10:39:38.300639] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:17.193 NewBaseBdev 00:11:17.193 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.193 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:17.193 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:17.193 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:17.193 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:17.193 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:17.193 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:17.193 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:17.193 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.193 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.193 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.193 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:17.193 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.193 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.193 [ 00:11:17.193 { 
00:11:17.193 "name": "NewBaseBdev", 00:11:17.193 "aliases": [ 00:11:17.193 "c6d215bd-3b85-4b9d-b894-bab35bbf91da" 00:11:17.193 ], 00:11:17.193 "product_name": "Malloc disk", 00:11:17.193 "block_size": 512, 00:11:17.193 "num_blocks": 65536, 00:11:17.193 "uuid": "c6d215bd-3b85-4b9d-b894-bab35bbf91da", 00:11:17.193 "assigned_rate_limits": { 00:11:17.193 "rw_ios_per_sec": 0, 00:11:17.193 "rw_mbytes_per_sec": 0, 00:11:17.193 "r_mbytes_per_sec": 0, 00:11:17.193 "w_mbytes_per_sec": 0 00:11:17.193 }, 00:11:17.193 "claimed": true, 00:11:17.193 "claim_type": "exclusive_write", 00:11:17.193 "zoned": false, 00:11:17.193 "supported_io_types": { 00:11:17.193 "read": true, 00:11:17.193 "write": true, 00:11:17.193 "unmap": true, 00:11:17.193 "flush": true, 00:11:17.193 "reset": true, 00:11:17.193 "nvme_admin": false, 00:11:17.193 "nvme_io": false, 00:11:17.193 "nvme_io_md": false, 00:11:17.193 "write_zeroes": true, 00:11:17.193 "zcopy": true, 00:11:17.193 "get_zone_info": false, 00:11:17.193 "zone_management": false, 00:11:17.193 "zone_append": false, 00:11:17.193 "compare": false, 00:11:17.193 "compare_and_write": false, 00:11:17.193 "abort": true, 00:11:17.193 "seek_hole": false, 00:11:17.193 "seek_data": false, 00:11:17.193 "copy": true, 00:11:17.193 "nvme_iov_md": false 00:11:17.193 }, 00:11:17.193 "memory_domains": [ 00:11:17.193 { 00:11:17.193 "dma_device_id": "system", 00:11:17.193 "dma_device_type": 1 00:11:17.193 }, 00:11:17.193 { 00:11:17.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.193 "dma_device_type": 2 00:11:17.193 } 00:11:17.193 ], 00:11:17.193 "driver_specific": {} 00:11:17.193 } 00:11:17.193 ] 00:11:17.193 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.193 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:17.193 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:17.193 
10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.193 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:17.193 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:17.193 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.193 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.193 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.193 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.193 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.193 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.193 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.193 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.193 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.193 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.451 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.451 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.451 "name": "Existed_Raid", 00:11:17.451 "uuid": "bf6cde30-489b-4e2d-b4e8-286918b65121", 00:11:17.451 "strip_size_kb": 64, 00:11:17.451 "state": "online", 00:11:17.451 "raid_level": "concat", 00:11:17.451 "superblock": false, 00:11:17.451 "num_base_bdevs": 4, 00:11:17.451 "num_base_bdevs_discovered": 4, 00:11:17.451 
"num_base_bdevs_operational": 4, 00:11:17.451 "base_bdevs_list": [ 00:11:17.451 { 00:11:17.451 "name": "NewBaseBdev", 00:11:17.451 "uuid": "c6d215bd-3b85-4b9d-b894-bab35bbf91da", 00:11:17.451 "is_configured": true, 00:11:17.451 "data_offset": 0, 00:11:17.451 "data_size": 65536 00:11:17.451 }, 00:11:17.451 { 00:11:17.451 "name": "BaseBdev2", 00:11:17.451 "uuid": "d3739f3f-412d-46f1-a9fe-588997bed8de", 00:11:17.451 "is_configured": true, 00:11:17.451 "data_offset": 0, 00:11:17.451 "data_size": 65536 00:11:17.451 }, 00:11:17.451 { 00:11:17.451 "name": "BaseBdev3", 00:11:17.451 "uuid": "4d450f73-872a-464e-8d7d-64334a5c2ddd", 00:11:17.451 "is_configured": true, 00:11:17.451 "data_offset": 0, 00:11:17.451 "data_size": 65536 00:11:17.451 }, 00:11:17.451 { 00:11:17.451 "name": "BaseBdev4", 00:11:17.451 "uuid": "e166e0cd-04a2-47ec-a20e-210346389d74", 00:11:17.451 "is_configured": true, 00:11:17.451 "data_offset": 0, 00:11:17.451 "data_size": 65536 00:11:17.451 } 00:11:17.451 ] 00:11:17.451 }' 00:11:17.451 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.451 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.709 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:17.709 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:17.709 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:17.709 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:17.709 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:17.709 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:17.709 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:11:17.709 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:17.709 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.709 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.709 [2024-11-15 10:39:38.856253] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:17.967 10:39:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.967 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:17.967 "name": "Existed_Raid", 00:11:17.967 "aliases": [ 00:11:17.967 "bf6cde30-489b-4e2d-b4e8-286918b65121" 00:11:17.967 ], 00:11:17.967 "product_name": "Raid Volume", 00:11:17.967 "block_size": 512, 00:11:17.967 "num_blocks": 262144, 00:11:17.967 "uuid": "bf6cde30-489b-4e2d-b4e8-286918b65121", 00:11:17.967 "assigned_rate_limits": { 00:11:17.967 "rw_ios_per_sec": 0, 00:11:17.967 "rw_mbytes_per_sec": 0, 00:11:17.967 "r_mbytes_per_sec": 0, 00:11:17.967 "w_mbytes_per_sec": 0 00:11:17.967 }, 00:11:17.967 "claimed": false, 00:11:17.967 "zoned": false, 00:11:17.967 "supported_io_types": { 00:11:17.967 "read": true, 00:11:17.967 "write": true, 00:11:17.967 "unmap": true, 00:11:17.967 "flush": true, 00:11:17.967 "reset": true, 00:11:17.967 "nvme_admin": false, 00:11:17.967 "nvme_io": false, 00:11:17.967 "nvme_io_md": false, 00:11:17.967 "write_zeroes": true, 00:11:17.967 "zcopy": false, 00:11:17.967 "get_zone_info": false, 00:11:17.967 "zone_management": false, 00:11:17.967 "zone_append": false, 00:11:17.967 "compare": false, 00:11:17.967 "compare_and_write": false, 00:11:17.967 "abort": false, 00:11:17.967 "seek_hole": false, 00:11:17.967 "seek_data": false, 00:11:17.967 "copy": false, 00:11:17.967 "nvme_iov_md": false 00:11:17.967 }, 00:11:17.967 "memory_domains": [ 00:11:17.967 { 00:11:17.967 "dma_device_id": "system", 
00:11:17.967 "dma_device_type": 1 00:11:17.967 }, 00:11:17.967 { 00:11:17.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.967 "dma_device_type": 2 00:11:17.967 }, 00:11:17.967 { 00:11:17.967 "dma_device_id": "system", 00:11:17.967 "dma_device_type": 1 00:11:17.967 }, 00:11:17.967 { 00:11:17.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.967 "dma_device_type": 2 00:11:17.967 }, 00:11:17.967 { 00:11:17.967 "dma_device_id": "system", 00:11:17.967 "dma_device_type": 1 00:11:17.967 }, 00:11:17.967 { 00:11:17.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.967 "dma_device_type": 2 00:11:17.967 }, 00:11:17.967 { 00:11:17.967 "dma_device_id": "system", 00:11:17.967 "dma_device_type": 1 00:11:17.967 }, 00:11:17.967 { 00:11:17.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.967 "dma_device_type": 2 00:11:17.967 } 00:11:17.967 ], 00:11:17.967 "driver_specific": { 00:11:17.967 "raid": { 00:11:17.967 "uuid": "bf6cde30-489b-4e2d-b4e8-286918b65121", 00:11:17.967 "strip_size_kb": 64, 00:11:17.967 "state": "online", 00:11:17.967 "raid_level": "concat", 00:11:17.967 "superblock": false, 00:11:17.967 "num_base_bdevs": 4, 00:11:17.967 "num_base_bdevs_discovered": 4, 00:11:17.967 "num_base_bdevs_operational": 4, 00:11:17.967 "base_bdevs_list": [ 00:11:17.967 { 00:11:17.967 "name": "NewBaseBdev", 00:11:17.967 "uuid": "c6d215bd-3b85-4b9d-b894-bab35bbf91da", 00:11:17.967 "is_configured": true, 00:11:17.967 "data_offset": 0, 00:11:17.967 "data_size": 65536 00:11:17.967 }, 00:11:17.967 { 00:11:17.967 "name": "BaseBdev2", 00:11:17.967 "uuid": "d3739f3f-412d-46f1-a9fe-588997bed8de", 00:11:17.967 "is_configured": true, 00:11:17.967 "data_offset": 0, 00:11:17.967 "data_size": 65536 00:11:17.967 }, 00:11:17.967 { 00:11:17.967 "name": "BaseBdev3", 00:11:17.967 "uuid": "4d450f73-872a-464e-8d7d-64334a5c2ddd", 00:11:17.967 "is_configured": true, 00:11:17.967 "data_offset": 0, 00:11:17.967 "data_size": 65536 00:11:17.967 }, 00:11:17.967 { 00:11:17.967 "name": "BaseBdev4", 
00:11:17.967 "uuid": "e166e0cd-04a2-47ec-a20e-210346389d74", 00:11:17.967 "is_configured": true, 00:11:17.967 "data_offset": 0, 00:11:17.967 "data_size": 65536 00:11:17.967 } 00:11:17.967 ] 00:11:17.967 } 00:11:17.967 } 00:11:17.967 }' 00:11:17.967 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:17.967 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:17.967 BaseBdev2 00:11:17.967 BaseBdev3 00:11:17.967 BaseBdev4' 00:11:17.967 10:39:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.967 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:17.967 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.967 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:17.967 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.967 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.967 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.967 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.967 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.967 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.967 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.967 10:39:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:17.967 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.967 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.967 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.967 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.968 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.968 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.968 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.968 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:17.968 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.968 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.968 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.225 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.225 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.225 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:18.225 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:18.225 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:18.225 10:39:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.225 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.225 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.225 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.225 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.225 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:18.225 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:18.225 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.225 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.225 [2024-11-15 10:39:39.223854] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:18.225 [2024-11-15 10:39:39.223893] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:18.225 [2024-11-15 10:39:39.223986] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:18.225 [2024-11-15 10:39:39.224080] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:18.225 [2024-11-15 10:39:39.224097] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:18.225 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.225 10:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71369 00:11:18.225 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 
-- # '[' -z 71369 ']' 00:11:18.225 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71369 00:11:18.225 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:18.225 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:18.225 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71369 00:11:18.225 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:18.225 killing process with pid 71369 00:11:18.225 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:18.225 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71369' 00:11:18.225 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71369 00:11:18.225 [2024-11-15 10:39:39.265041] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:18.225 10:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71369 00:11:18.483 [2024-11-15 10:39:39.621206] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:19.858 10:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:19.858 00:11:19.858 real 0m12.880s 00:11:19.858 user 0m21.382s 00:11:19.858 sys 0m1.792s 00:11:19.858 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.858 ************************************ 00:11:19.858 END TEST raid_state_function_test 00:11:19.858 ************************************ 00:11:19.858 10:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.858 10:39:40 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 
00:11:19.858 10:39:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:19.858 10:39:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.858 10:39:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:19.858 ************************************ 00:11:19.858 START TEST raid_state_function_test_sb 00:11:19.858 ************************************ 00:11:19.858 10:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:11:19.858 10:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:19.858 10:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:19.858 10:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:19.858 10:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:19.858 10:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:19.858 10:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:19.858 10:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:19.858 10:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:19.858 10:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:19.858 10:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:19.858 10:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:19.858 10:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:19.858 10:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:19.858 10:39:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:19.858 10:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:19.858 10:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:19.858 10:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:19.858 10:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:19.858 10:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:19.858 10:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:19.858 10:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:19.858 10:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:19.858 10:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:19.858 10:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:19.858 10:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:19.858 10:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:19.858 10:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:19.858 10:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:19.858 10:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:19.858 Process raid pid: 72056 00:11:19.858 10:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72056 00:11:19.858 10:39:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72056' 00:11:19.858 10:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72056 00:11:19.858 10:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:19.858 10:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72056 ']' 00:11:19.858 10:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.858 10:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:19.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.858 10:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.858 10:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:19.858 10:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.858 [2024-11-15 10:39:40.813751] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:11:19.858 [2024-11-15 10:39:40.813927] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:19.859 [2024-11-15 10:39:40.998928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.117 [2024-11-15 10:39:41.125942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.377 [2024-11-15 10:39:41.332243] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:20.377 [2024-11-15 10:39:41.332290] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:20.636 10:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:20.636 10:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:20.636 10:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:20.636 10:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.636 10:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.636 [2024-11-15 10:39:41.775328] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:20.636 [2024-11-15 10:39:41.775393] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:20.636 [2024-11-15 10:39:41.775411] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:20.636 [2024-11-15 10:39:41.775428] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:20.636 [2024-11-15 10:39:41.775438] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:11:20.636 [2024-11-15 10:39:41.775453] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:20.636 [2024-11-15 10:39:41.775463] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:20.636 [2024-11-15 10:39:41.775477] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:20.636 10:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.636 10:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:20.636 10:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.636 10:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.636 10:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:20.636 10:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.636 10:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.636 10:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.636 10:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.636 10:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.636 10:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.636 10:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.636 10:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.636 10:39:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.636 10:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.895 10:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.895 10:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.895 "name": "Existed_Raid", 00:11:20.895 "uuid": "3a123176-9985-47c8-b2d8-69fa47147e90", 00:11:20.895 "strip_size_kb": 64, 00:11:20.895 "state": "configuring", 00:11:20.895 "raid_level": "concat", 00:11:20.895 "superblock": true, 00:11:20.895 "num_base_bdevs": 4, 00:11:20.895 "num_base_bdevs_discovered": 0, 00:11:20.895 "num_base_bdevs_operational": 4, 00:11:20.895 "base_bdevs_list": [ 00:11:20.895 { 00:11:20.895 "name": "BaseBdev1", 00:11:20.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.895 "is_configured": false, 00:11:20.895 "data_offset": 0, 00:11:20.895 "data_size": 0 00:11:20.895 }, 00:11:20.895 { 00:11:20.895 "name": "BaseBdev2", 00:11:20.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.895 "is_configured": false, 00:11:20.895 "data_offset": 0, 00:11:20.895 "data_size": 0 00:11:20.895 }, 00:11:20.895 { 00:11:20.895 "name": "BaseBdev3", 00:11:20.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.895 "is_configured": false, 00:11:20.895 "data_offset": 0, 00:11:20.895 "data_size": 0 00:11:20.895 }, 00:11:20.895 { 00:11:20.895 "name": "BaseBdev4", 00:11:20.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.895 "is_configured": false, 00:11:20.895 "data_offset": 0, 00:11:20.895 "data_size": 0 00:11:20.895 } 00:11:20.895 ] 00:11:20.895 }' 00:11:20.895 10:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.895 10:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.154 10:39:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:21.154 10:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.154 10:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.154 [2024-11-15 10:39:42.271415] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:21.154 [2024-11-15 10:39:42.271473] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:21.154 10:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.154 10:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:21.154 10:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.154 10:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.154 [2024-11-15 10:39:42.283506] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:21.154 [2024-11-15 10:39:42.283590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:21.154 [2024-11-15 10:39:42.283607] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:21.154 [2024-11-15 10:39:42.283625] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:21.154 [2024-11-15 10:39:42.283635] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:21.154 [2024-11-15 10:39:42.283651] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:21.154 [2024-11-15 10:39:42.283661] bdev.c:8277:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:11:21.154 [2024-11-15 10:39:42.283675] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:21.154 10:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.154 10:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:21.154 10:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.154 10:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.413 [2024-11-15 10:39:42.328829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:21.413 BaseBdev1 00:11:21.413 10:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.413 10:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:21.413 10:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:21.413 10:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:21.413 10:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:21.413 10:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:21.413 10:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:21.413 10:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:21.413 10:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.413 10:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.413 10:39:42 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.413 10:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:21.413 10:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.413 10:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.413 [ 00:11:21.413 { 00:11:21.413 "name": "BaseBdev1", 00:11:21.413 "aliases": [ 00:11:21.413 "05019240-17c5-4702-b57a-bd3fc831ae21" 00:11:21.413 ], 00:11:21.413 "product_name": "Malloc disk", 00:11:21.413 "block_size": 512, 00:11:21.413 "num_blocks": 65536, 00:11:21.413 "uuid": "05019240-17c5-4702-b57a-bd3fc831ae21", 00:11:21.413 "assigned_rate_limits": { 00:11:21.413 "rw_ios_per_sec": 0, 00:11:21.413 "rw_mbytes_per_sec": 0, 00:11:21.413 "r_mbytes_per_sec": 0, 00:11:21.413 "w_mbytes_per_sec": 0 00:11:21.413 }, 00:11:21.413 "claimed": true, 00:11:21.413 "claim_type": "exclusive_write", 00:11:21.413 "zoned": false, 00:11:21.413 "supported_io_types": { 00:11:21.413 "read": true, 00:11:21.413 "write": true, 00:11:21.413 "unmap": true, 00:11:21.413 "flush": true, 00:11:21.413 "reset": true, 00:11:21.413 "nvme_admin": false, 00:11:21.413 "nvme_io": false, 00:11:21.413 "nvme_io_md": false, 00:11:21.413 "write_zeroes": true, 00:11:21.413 "zcopy": true, 00:11:21.413 "get_zone_info": false, 00:11:21.413 "zone_management": false, 00:11:21.413 "zone_append": false, 00:11:21.413 "compare": false, 00:11:21.413 "compare_and_write": false, 00:11:21.413 "abort": true, 00:11:21.413 "seek_hole": false, 00:11:21.413 "seek_data": false, 00:11:21.413 "copy": true, 00:11:21.413 "nvme_iov_md": false 00:11:21.413 }, 00:11:21.413 "memory_domains": [ 00:11:21.413 { 00:11:21.413 "dma_device_id": "system", 00:11:21.413 "dma_device_type": 1 00:11:21.413 }, 00:11:21.413 { 00:11:21.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.413 "dma_device_type": 2 00:11:21.413 } 
00:11:21.413 ], 00:11:21.413 "driver_specific": {} 00:11:21.414 } 00:11:21.414 ] 00:11:21.414 10:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.414 10:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:21.414 10:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:21.414 10:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.414 10:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.414 10:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:21.414 10:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.414 10:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.414 10:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.414 10:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.414 10:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.414 10:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.414 10:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.414 10:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.414 10:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.414 10:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.414 10:39:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.414 10:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.414 "name": "Existed_Raid", 00:11:21.414 "uuid": "10604f07-66eb-4c5d-abea-0a79ff4c392d", 00:11:21.414 "strip_size_kb": 64, 00:11:21.414 "state": "configuring", 00:11:21.414 "raid_level": "concat", 00:11:21.414 "superblock": true, 00:11:21.414 "num_base_bdevs": 4, 00:11:21.414 "num_base_bdevs_discovered": 1, 00:11:21.414 "num_base_bdevs_operational": 4, 00:11:21.414 "base_bdevs_list": [ 00:11:21.414 { 00:11:21.414 "name": "BaseBdev1", 00:11:21.414 "uuid": "05019240-17c5-4702-b57a-bd3fc831ae21", 00:11:21.414 "is_configured": true, 00:11:21.414 "data_offset": 2048, 00:11:21.414 "data_size": 63488 00:11:21.414 }, 00:11:21.414 { 00:11:21.414 "name": "BaseBdev2", 00:11:21.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.414 "is_configured": false, 00:11:21.414 "data_offset": 0, 00:11:21.414 "data_size": 0 00:11:21.414 }, 00:11:21.414 { 00:11:21.414 "name": "BaseBdev3", 00:11:21.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.414 "is_configured": false, 00:11:21.414 "data_offset": 0, 00:11:21.414 "data_size": 0 00:11:21.414 }, 00:11:21.414 { 00:11:21.414 "name": "BaseBdev4", 00:11:21.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.414 "is_configured": false, 00:11:21.414 "data_offset": 0, 00:11:21.414 "data_size": 0 00:11:21.414 } 00:11:21.414 ] 00:11:21.414 }' 00:11:21.414 10:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.414 10:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.979 10:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:21.979 10:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.979 10:39:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.979 [2024-11-15 10:39:42.869054] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:21.979 [2024-11-15 10:39:42.869135] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:21.979 10:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.979 10:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:21.979 10:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.979 10:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.979 [2024-11-15 10:39:42.877137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:21.979 [2024-11-15 10:39:42.879603] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:21.979 [2024-11-15 10:39:42.879661] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:21.979 [2024-11-15 10:39:42.879679] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:21.979 [2024-11-15 10:39:42.879697] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:21.979 [2024-11-15 10:39:42.879708] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:21.979 [2024-11-15 10:39:42.879722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:21.979 10:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.979 10:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:11:21.979 10:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:21.979 10:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:21.979 10:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.979 10:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.979 10:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:21.979 10:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.979 10:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.979 10:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.980 10:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.980 10:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.980 10:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.980 10:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.980 10:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.980 10:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.980 10:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.980 10:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.980 10:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:11:21.980 "name": "Existed_Raid", 00:11:21.980 "uuid": "99bae57f-68aa-4652-a87c-4ff5f4ebc72f", 00:11:21.980 "strip_size_kb": 64, 00:11:21.980 "state": "configuring", 00:11:21.980 "raid_level": "concat", 00:11:21.980 "superblock": true, 00:11:21.980 "num_base_bdevs": 4, 00:11:21.980 "num_base_bdevs_discovered": 1, 00:11:21.980 "num_base_bdevs_operational": 4, 00:11:21.980 "base_bdevs_list": [ 00:11:21.980 { 00:11:21.980 "name": "BaseBdev1", 00:11:21.980 "uuid": "05019240-17c5-4702-b57a-bd3fc831ae21", 00:11:21.980 "is_configured": true, 00:11:21.980 "data_offset": 2048, 00:11:21.980 "data_size": 63488 00:11:21.980 }, 00:11:21.980 { 00:11:21.980 "name": "BaseBdev2", 00:11:21.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.980 "is_configured": false, 00:11:21.980 "data_offset": 0, 00:11:21.980 "data_size": 0 00:11:21.980 }, 00:11:21.980 { 00:11:21.980 "name": "BaseBdev3", 00:11:21.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.980 "is_configured": false, 00:11:21.980 "data_offset": 0, 00:11:21.980 "data_size": 0 00:11:21.980 }, 00:11:21.980 { 00:11:21.980 "name": "BaseBdev4", 00:11:21.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.980 "is_configured": false, 00:11:21.980 "data_offset": 0, 00:11:21.980 "data_size": 0 00:11:21.980 } 00:11:21.980 ] 00:11:21.980 }' 00:11:21.980 10:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.980 10:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.238 10:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:22.238 10:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.238 10:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.497 [2024-11-15 10:39:43.416410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:11:22.497 BaseBdev2 00:11:22.497 10:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.497 10:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:22.497 10:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:22.497 10:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:22.497 10:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:22.497 10:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:22.497 10:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:22.497 10:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:22.497 10:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.497 10:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.497 10:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.497 10:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:22.497 10:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.497 10:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.497 [ 00:11:22.497 { 00:11:22.497 "name": "BaseBdev2", 00:11:22.497 "aliases": [ 00:11:22.497 "e8a0c108-68d2-4990-9d7d-e366a87dabd3" 00:11:22.497 ], 00:11:22.497 "product_name": "Malloc disk", 00:11:22.497 "block_size": 512, 00:11:22.497 "num_blocks": 65536, 00:11:22.497 "uuid": "e8a0c108-68d2-4990-9d7d-e366a87dabd3", 
00:11:22.497 "assigned_rate_limits": { 00:11:22.497 "rw_ios_per_sec": 0, 00:11:22.497 "rw_mbytes_per_sec": 0, 00:11:22.497 "r_mbytes_per_sec": 0, 00:11:22.497 "w_mbytes_per_sec": 0 00:11:22.497 }, 00:11:22.497 "claimed": true, 00:11:22.497 "claim_type": "exclusive_write", 00:11:22.497 "zoned": false, 00:11:22.497 "supported_io_types": { 00:11:22.497 "read": true, 00:11:22.497 "write": true, 00:11:22.497 "unmap": true, 00:11:22.497 "flush": true, 00:11:22.497 "reset": true, 00:11:22.497 "nvme_admin": false, 00:11:22.497 "nvme_io": false, 00:11:22.497 "nvme_io_md": false, 00:11:22.497 "write_zeroes": true, 00:11:22.497 "zcopy": true, 00:11:22.497 "get_zone_info": false, 00:11:22.497 "zone_management": false, 00:11:22.497 "zone_append": false, 00:11:22.497 "compare": false, 00:11:22.497 "compare_and_write": false, 00:11:22.497 "abort": true, 00:11:22.497 "seek_hole": false, 00:11:22.497 "seek_data": false, 00:11:22.497 "copy": true, 00:11:22.497 "nvme_iov_md": false 00:11:22.497 }, 00:11:22.497 "memory_domains": [ 00:11:22.497 { 00:11:22.497 "dma_device_id": "system", 00:11:22.497 "dma_device_type": 1 00:11:22.497 }, 00:11:22.497 { 00:11:22.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.497 "dma_device_type": 2 00:11:22.497 } 00:11:22.497 ], 00:11:22.497 "driver_specific": {} 00:11:22.497 } 00:11:22.497 ] 00:11:22.497 10:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.497 10:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:22.497 10:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:22.497 10:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:22.497 10:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:22.497 10:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:22.497 10:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.497 10:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:22.497 10:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.497 10:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.497 10:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.497 10:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.497 10:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.497 10:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.497 10:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.497 10:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.497 10:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.497 10:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.497 10:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.497 10:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.497 "name": "Existed_Raid", 00:11:22.497 "uuid": "99bae57f-68aa-4652-a87c-4ff5f4ebc72f", 00:11:22.497 "strip_size_kb": 64, 00:11:22.497 "state": "configuring", 00:11:22.497 "raid_level": "concat", 00:11:22.497 "superblock": true, 00:11:22.497 "num_base_bdevs": 4, 00:11:22.497 "num_base_bdevs_discovered": 2, 00:11:22.497 
"num_base_bdevs_operational": 4, 00:11:22.497 "base_bdevs_list": [ 00:11:22.497 { 00:11:22.497 "name": "BaseBdev1", 00:11:22.497 "uuid": "05019240-17c5-4702-b57a-bd3fc831ae21", 00:11:22.497 "is_configured": true, 00:11:22.497 "data_offset": 2048, 00:11:22.497 "data_size": 63488 00:11:22.497 }, 00:11:22.497 { 00:11:22.497 "name": "BaseBdev2", 00:11:22.497 "uuid": "e8a0c108-68d2-4990-9d7d-e366a87dabd3", 00:11:22.497 "is_configured": true, 00:11:22.497 "data_offset": 2048, 00:11:22.497 "data_size": 63488 00:11:22.497 }, 00:11:22.497 { 00:11:22.497 "name": "BaseBdev3", 00:11:22.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.497 "is_configured": false, 00:11:22.497 "data_offset": 0, 00:11:22.497 "data_size": 0 00:11:22.497 }, 00:11:22.497 { 00:11:22.497 "name": "BaseBdev4", 00:11:22.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.497 "is_configured": false, 00:11:22.497 "data_offset": 0, 00:11:22.497 "data_size": 0 00:11:22.497 } 00:11:22.497 ] 00:11:22.497 }' 00:11:22.497 10:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.497 10:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.064 10:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:23.064 10:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.064 10:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.064 [2024-11-15 10:39:44.035769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:23.064 BaseBdev3 00:11:23.064 10:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.064 10:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:23.064 10:39:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:23.064 10:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:23.064 10:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:23.064 10:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:23.064 10:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:23.064 10:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:23.064 10:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.064 10:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.064 10:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.064 10:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:23.064 10:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.064 10:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.064 [ 00:11:23.064 { 00:11:23.064 "name": "BaseBdev3", 00:11:23.064 "aliases": [ 00:11:23.064 "3113bf8c-98e9-4200-9a91-0bbc8a8e4ecc" 00:11:23.064 ], 00:11:23.064 "product_name": "Malloc disk", 00:11:23.064 "block_size": 512, 00:11:23.064 "num_blocks": 65536, 00:11:23.064 "uuid": "3113bf8c-98e9-4200-9a91-0bbc8a8e4ecc", 00:11:23.064 "assigned_rate_limits": { 00:11:23.064 "rw_ios_per_sec": 0, 00:11:23.064 "rw_mbytes_per_sec": 0, 00:11:23.064 "r_mbytes_per_sec": 0, 00:11:23.064 "w_mbytes_per_sec": 0 00:11:23.064 }, 00:11:23.064 "claimed": true, 00:11:23.064 "claim_type": "exclusive_write", 00:11:23.064 "zoned": false, 00:11:23.064 "supported_io_types": { 
00:11:23.064 "read": true, 00:11:23.064 "write": true, 00:11:23.064 "unmap": true, 00:11:23.064 "flush": true, 00:11:23.064 "reset": true, 00:11:23.064 "nvme_admin": false, 00:11:23.064 "nvme_io": false, 00:11:23.064 "nvme_io_md": false, 00:11:23.064 "write_zeroes": true, 00:11:23.064 "zcopy": true, 00:11:23.064 "get_zone_info": false, 00:11:23.064 "zone_management": false, 00:11:23.064 "zone_append": false, 00:11:23.064 "compare": false, 00:11:23.064 "compare_and_write": false, 00:11:23.064 "abort": true, 00:11:23.064 "seek_hole": false, 00:11:23.064 "seek_data": false, 00:11:23.064 "copy": true, 00:11:23.064 "nvme_iov_md": false 00:11:23.064 }, 00:11:23.064 "memory_domains": [ 00:11:23.064 { 00:11:23.064 "dma_device_id": "system", 00:11:23.064 "dma_device_type": 1 00:11:23.064 }, 00:11:23.064 { 00:11:23.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.064 "dma_device_type": 2 00:11:23.064 } 00:11:23.064 ], 00:11:23.064 "driver_specific": {} 00:11:23.064 } 00:11:23.064 ] 00:11:23.064 10:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.064 10:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:23.064 10:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:23.064 10:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:23.064 10:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:23.064 10:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.064 10:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.064 10:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:23.064 10:39:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.064 10:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.064 10:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.064 10:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.064 10:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.064 10:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.064 10:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.064 10:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.064 10:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.064 10:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.064 10:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.064 10:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.064 "name": "Existed_Raid", 00:11:23.064 "uuid": "99bae57f-68aa-4652-a87c-4ff5f4ebc72f", 00:11:23.064 "strip_size_kb": 64, 00:11:23.064 "state": "configuring", 00:11:23.064 "raid_level": "concat", 00:11:23.064 "superblock": true, 00:11:23.064 "num_base_bdevs": 4, 00:11:23.064 "num_base_bdevs_discovered": 3, 00:11:23.064 "num_base_bdevs_operational": 4, 00:11:23.064 "base_bdevs_list": [ 00:11:23.064 { 00:11:23.064 "name": "BaseBdev1", 00:11:23.064 "uuid": "05019240-17c5-4702-b57a-bd3fc831ae21", 00:11:23.064 "is_configured": true, 00:11:23.064 "data_offset": 2048, 00:11:23.064 "data_size": 63488 00:11:23.064 }, 00:11:23.064 { 00:11:23.064 "name": "BaseBdev2", 00:11:23.064 
"uuid": "e8a0c108-68d2-4990-9d7d-e366a87dabd3", 00:11:23.064 "is_configured": true, 00:11:23.064 "data_offset": 2048, 00:11:23.064 "data_size": 63488 00:11:23.064 }, 00:11:23.064 { 00:11:23.064 "name": "BaseBdev3", 00:11:23.064 "uuid": "3113bf8c-98e9-4200-9a91-0bbc8a8e4ecc", 00:11:23.064 "is_configured": true, 00:11:23.064 "data_offset": 2048, 00:11:23.064 "data_size": 63488 00:11:23.064 }, 00:11:23.064 { 00:11:23.064 "name": "BaseBdev4", 00:11:23.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.064 "is_configured": false, 00:11:23.064 "data_offset": 0, 00:11:23.064 "data_size": 0 00:11:23.064 } 00:11:23.064 ] 00:11:23.064 }' 00:11:23.064 10:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.064 10:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.630 10:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:23.630 10:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.630 10:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.630 [2024-11-15 10:39:44.622118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:23.630 [2024-11-15 10:39:44.622437] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:23.630 [2024-11-15 10:39:44.622458] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:23.630 BaseBdev4 00:11:23.630 [2024-11-15 10:39:44.622823] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:23.630 [2024-11-15 10:39:44.623029] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:23.630 [2024-11-15 10:39:44.623054] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:23.630 [2024-11-15 10:39:44.623230] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:23.630 10:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.630 10:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:23.630 10:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:23.630 10:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:23.630 10:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:23.630 10:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:23.630 10:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:23.630 10:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:23.630 10:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.630 10:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.630 10:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.630 10:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:23.630 10:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.630 10:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.630 [ 00:11:23.630 { 00:11:23.630 "name": "BaseBdev4", 00:11:23.630 "aliases": [ 00:11:23.630 "f818a785-d4f4-464b-8c07-4903673325c9" 00:11:23.630 ], 00:11:23.630 "product_name": "Malloc disk", 00:11:23.630 "block_size": 512, 00:11:23.630 
"num_blocks": 65536, 00:11:23.630 "uuid": "f818a785-d4f4-464b-8c07-4903673325c9", 00:11:23.630 "assigned_rate_limits": { 00:11:23.630 "rw_ios_per_sec": 0, 00:11:23.630 "rw_mbytes_per_sec": 0, 00:11:23.630 "r_mbytes_per_sec": 0, 00:11:23.630 "w_mbytes_per_sec": 0 00:11:23.630 }, 00:11:23.630 "claimed": true, 00:11:23.630 "claim_type": "exclusive_write", 00:11:23.630 "zoned": false, 00:11:23.630 "supported_io_types": { 00:11:23.630 "read": true, 00:11:23.630 "write": true, 00:11:23.630 "unmap": true, 00:11:23.630 "flush": true, 00:11:23.630 "reset": true, 00:11:23.630 "nvme_admin": false, 00:11:23.630 "nvme_io": false, 00:11:23.630 "nvme_io_md": false, 00:11:23.630 "write_zeroes": true, 00:11:23.630 "zcopy": true, 00:11:23.630 "get_zone_info": false, 00:11:23.630 "zone_management": false, 00:11:23.630 "zone_append": false, 00:11:23.630 "compare": false, 00:11:23.630 "compare_and_write": false, 00:11:23.630 "abort": true, 00:11:23.630 "seek_hole": false, 00:11:23.630 "seek_data": false, 00:11:23.630 "copy": true, 00:11:23.630 "nvme_iov_md": false 00:11:23.630 }, 00:11:23.630 "memory_domains": [ 00:11:23.631 { 00:11:23.631 "dma_device_id": "system", 00:11:23.631 "dma_device_type": 1 00:11:23.631 }, 00:11:23.631 { 00:11:23.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.631 "dma_device_type": 2 00:11:23.631 } 00:11:23.631 ], 00:11:23.631 "driver_specific": {} 00:11:23.631 } 00:11:23.631 ] 00:11:23.631 10:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.631 10:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:23.631 10:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:23.631 10:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:23.631 10:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:11:23.631 10:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.631 10:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:23.631 10:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:23.631 10:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.631 10:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.631 10:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.631 10:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.631 10:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.631 10:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.631 10:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.631 10:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.631 10:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.631 10:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.631 10:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.631 10:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.631 "name": "Existed_Raid", 00:11:23.631 "uuid": "99bae57f-68aa-4652-a87c-4ff5f4ebc72f", 00:11:23.631 "strip_size_kb": 64, 00:11:23.631 "state": "online", 00:11:23.631 "raid_level": "concat", 00:11:23.631 "superblock": true, 00:11:23.631 "num_base_bdevs": 4, 
00:11:23.631 "num_base_bdevs_discovered": 4, 00:11:23.631 "num_base_bdevs_operational": 4, 00:11:23.631 "base_bdevs_list": [ 00:11:23.631 { 00:11:23.631 "name": "BaseBdev1", 00:11:23.631 "uuid": "05019240-17c5-4702-b57a-bd3fc831ae21", 00:11:23.631 "is_configured": true, 00:11:23.631 "data_offset": 2048, 00:11:23.631 "data_size": 63488 00:11:23.631 }, 00:11:23.631 { 00:11:23.631 "name": "BaseBdev2", 00:11:23.631 "uuid": "e8a0c108-68d2-4990-9d7d-e366a87dabd3", 00:11:23.631 "is_configured": true, 00:11:23.631 "data_offset": 2048, 00:11:23.631 "data_size": 63488 00:11:23.631 }, 00:11:23.631 { 00:11:23.631 "name": "BaseBdev3", 00:11:23.631 "uuid": "3113bf8c-98e9-4200-9a91-0bbc8a8e4ecc", 00:11:23.631 "is_configured": true, 00:11:23.631 "data_offset": 2048, 00:11:23.631 "data_size": 63488 00:11:23.631 }, 00:11:23.631 { 00:11:23.631 "name": "BaseBdev4", 00:11:23.631 "uuid": "f818a785-d4f4-464b-8c07-4903673325c9", 00:11:23.631 "is_configured": true, 00:11:23.631 "data_offset": 2048, 00:11:23.631 "data_size": 63488 00:11:23.631 } 00:11:23.631 ] 00:11:23.631 }' 00:11:23.631 10:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.631 10:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.197 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:24.197 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:24.197 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:24.197 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:24.197 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:24.197 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:24.197 
10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:24.197 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:24.197 10:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.197 10:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.197 [2024-11-15 10:39:45.182774] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:24.197 10:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.197 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:24.197 "name": "Existed_Raid", 00:11:24.197 "aliases": [ 00:11:24.197 "99bae57f-68aa-4652-a87c-4ff5f4ebc72f" 00:11:24.197 ], 00:11:24.197 "product_name": "Raid Volume", 00:11:24.197 "block_size": 512, 00:11:24.197 "num_blocks": 253952, 00:11:24.197 "uuid": "99bae57f-68aa-4652-a87c-4ff5f4ebc72f", 00:11:24.197 "assigned_rate_limits": { 00:11:24.197 "rw_ios_per_sec": 0, 00:11:24.197 "rw_mbytes_per_sec": 0, 00:11:24.197 "r_mbytes_per_sec": 0, 00:11:24.197 "w_mbytes_per_sec": 0 00:11:24.197 }, 00:11:24.197 "claimed": false, 00:11:24.197 "zoned": false, 00:11:24.197 "supported_io_types": { 00:11:24.197 "read": true, 00:11:24.197 "write": true, 00:11:24.197 "unmap": true, 00:11:24.197 "flush": true, 00:11:24.197 "reset": true, 00:11:24.197 "nvme_admin": false, 00:11:24.197 "nvme_io": false, 00:11:24.197 "nvme_io_md": false, 00:11:24.197 "write_zeroes": true, 00:11:24.197 "zcopy": false, 00:11:24.197 "get_zone_info": false, 00:11:24.197 "zone_management": false, 00:11:24.197 "zone_append": false, 00:11:24.197 "compare": false, 00:11:24.197 "compare_and_write": false, 00:11:24.197 "abort": false, 00:11:24.197 "seek_hole": false, 00:11:24.197 "seek_data": false, 00:11:24.197 "copy": false, 00:11:24.197 
"nvme_iov_md": false 00:11:24.197 }, 00:11:24.197 "memory_domains": [ 00:11:24.197 { 00:11:24.197 "dma_device_id": "system", 00:11:24.197 "dma_device_type": 1 00:11:24.197 }, 00:11:24.197 { 00:11:24.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.197 "dma_device_type": 2 00:11:24.197 }, 00:11:24.197 { 00:11:24.197 "dma_device_id": "system", 00:11:24.197 "dma_device_type": 1 00:11:24.197 }, 00:11:24.197 { 00:11:24.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.197 "dma_device_type": 2 00:11:24.197 }, 00:11:24.197 { 00:11:24.197 "dma_device_id": "system", 00:11:24.197 "dma_device_type": 1 00:11:24.197 }, 00:11:24.197 { 00:11:24.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.197 "dma_device_type": 2 00:11:24.197 }, 00:11:24.197 { 00:11:24.197 "dma_device_id": "system", 00:11:24.197 "dma_device_type": 1 00:11:24.197 }, 00:11:24.197 { 00:11:24.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.197 "dma_device_type": 2 00:11:24.197 } 00:11:24.197 ], 00:11:24.197 "driver_specific": { 00:11:24.197 "raid": { 00:11:24.197 "uuid": "99bae57f-68aa-4652-a87c-4ff5f4ebc72f", 00:11:24.197 "strip_size_kb": 64, 00:11:24.197 "state": "online", 00:11:24.197 "raid_level": "concat", 00:11:24.197 "superblock": true, 00:11:24.197 "num_base_bdevs": 4, 00:11:24.197 "num_base_bdevs_discovered": 4, 00:11:24.197 "num_base_bdevs_operational": 4, 00:11:24.197 "base_bdevs_list": [ 00:11:24.197 { 00:11:24.197 "name": "BaseBdev1", 00:11:24.197 "uuid": "05019240-17c5-4702-b57a-bd3fc831ae21", 00:11:24.197 "is_configured": true, 00:11:24.197 "data_offset": 2048, 00:11:24.197 "data_size": 63488 00:11:24.197 }, 00:11:24.197 { 00:11:24.197 "name": "BaseBdev2", 00:11:24.197 "uuid": "e8a0c108-68d2-4990-9d7d-e366a87dabd3", 00:11:24.197 "is_configured": true, 00:11:24.197 "data_offset": 2048, 00:11:24.197 "data_size": 63488 00:11:24.197 }, 00:11:24.197 { 00:11:24.197 "name": "BaseBdev3", 00:11:24.197 "uuid": "3113bf8c-98e9-4200-9a91-0bbc8a8e4ecc", 00:11:24.197 "is_configured": true, 
00:11:24.197 "data_offset": 2048, 00:11:24.197 "data_size": 63488 00:11:24.197 }, 00:11:24.197 { 00:11:24.197 "name": "BaseBdev4", 00:11:24.197 "uuid": "f818a785-d4f4-464b-8c07-4903673325c9", 00:11:24.197 "is_configured": true, 00:11:24.197 "data_offset": 2048, 00:11:24.197 "data_size": 63488 00:11:24.197 } 00:11:24.197 ] 00:11:24.197 } 00:11:24.197 } 00:11:24.197 }' 00:11:24.197 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:24.198 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:24.198 BaseBdev2 00:11:24.198 BaseBdev3 00:11:24.198 BaseBdev4' 00:11:24.198 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.198 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:24.198 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.198 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:24.198 10:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.198 10:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.198 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.198 10:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.484 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.484 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.484 10:39:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.484 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.484 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:24.484 10:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.484 10:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.484 10:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.484 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.484 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.484 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.484 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.484 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:24.484 10:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.484 10:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.484 10:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.484 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.484 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.484 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:24.484 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:24.484 10:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.484 10:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.484 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.484 10:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.484 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.484 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.484 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:24.484 10:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.484 10:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.484 [2024-11-15 10:39:45.554468] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:24.484 [2024-11-15 10:39:45.554534] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:24.484 [2024-11-15 10:39:45.554603] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:24.743 10:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.743 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:24.743 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:24.743 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:24.743 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:24.743 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:24.743 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:24.743 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.743 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:24.743 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:24.743 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.743 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:24.743 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.743 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.743 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.743 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.743 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.743 10:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.743 10:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.743 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.743 10:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:24.743 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.743 "name": "Existed_Raid", 00:11:24.743 "uuid": "99bae57f-68aa-4652-a87c-4ff5f4ebc72f", 00:11:24.743 "strip_size_kb": 64, 00:11:24.743 "state": "offline", 00:11:24.743 "raid_level": "concat", 00:11:24.743 "superblock": true, 00:11:24.743 "num_base_bdevs": 4, 00:11:24.743 "num_base_bdevs_discovered": 3, 00:11:24.743 "num_base_bdevs_operational": 3, 00:11:24.743 "base_bdevs_list": [ 00:11:24.743 { 00:11:24.743 "name": null, 00:11:24.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.743 "is_configured": false, 00:11:24.743 "data_offset": 0, 00:11:24.743 "data_size": 63488 00:11:24.743 }, 00:11:24.743 { 00:11:24.743 "name": "BaseBdev2", 00:11:24.743 "uuid": "e8a0c108-68d2-4990-9d7d-e366a87dabd3", 00:11:24.743 "is_configured": true, 00:11:24.743 "data_offset": 2048, 00:11:24.743 "data_size": 63488 00:11:24.743 }, 00:11:24.743 { 00:11:24.743 "name": "BaseBdev3", 00:11:24.743 "uuid": "3113bf8c-98e9-4200-9a91-0bbc8a8e4ecc", 00:11:24.743 "is_configured": true, 00:11:24.743 "data_offset": 2048, 00:11:24.743 "data_size": 63488 00:11:24.743 }, 00:11:24.743 { 00:11:24.743 "name": "BaseBdev4", 00:11:24.743 "uuid": "f818a785-d4f4-464b-8c07-4903673325c9", 00:11:24.743 "is_configured": true, 00:11:24.743 "data_offset": 2048, 00:11:24.743 "data_size": 63488 00:11:24.743 } 00:11:24.743 ] 00:11:24.743 }' 00:11:24.743 10:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.743 10:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.001 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:25.001 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:25.001 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.001 
10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:25.001 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.001 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.258 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.258 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:25.258 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:25.258 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:25.258 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.258 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.258 [2024-11-15 10:39:46.199749] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:25.258 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.258 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:25.258 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:25.258 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.259 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:25.259 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.259 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.259 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:25.259 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:25.259 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:25.259 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:25.259 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.259 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.259 [2024-11-15 10:39:46.340563] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:25.517 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.517 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:25.517 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:25.517 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:25.517 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.517 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.517 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.517 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.517 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:25.517 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:25.517 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:25.517 10:39:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.517 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.517 [2024-11-15 10:39:46.487647] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:25.517 [2024-11-15 10:39:46.487710] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:25.517 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.517 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:25.517 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:25.517 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:25.517 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.517 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.517 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.517 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.517 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:25.517 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:25.517 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:25.517 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:25.517 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:25.517 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:25.517 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.517 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.517 BaseBdev2 00:11:25.517 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.517 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:25.517 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:25.517 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:25.517 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:25.517 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:25.517 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:25.517 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:25.517 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.517 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.517 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.517 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:25.517 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.517 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.776 [ 00:11:25.776 { 00:11:25.776 "name": "BaseBdev2", 00:11:25.776 "aliases": [ 00:11:25.776 
"25c364c3-d6ee-44bf-a7c5-4164e629e729" 00:11:25.776 ], 00:11:25.776 "product_name": "Malloc disk", 00:11:25.776 "block_size": 512, 00:11:25.776 "num_blocks": 65536, 00:11:25.776 "uuid": "25c364c3-d6ee-44bf-a7c5-4164e629e729", 00:11:25.776 "assigned_rate_limits": { 00:11:25.776 "rw_ios_per_sec": 0, 00:11:25.776 "rw_mbytes_per_sec": 0, 00:11:25.776 "r_mbytes_per_sec": 0, 00:11:25.776 "w_mbytes_per_sec": 0 00:11:25.776 }, 00:11:25.776 "claimed": false, 00:11:25.776 "zoned": false, 00:11:25.776 "supported_io_types": { 00:11:25.776 "read": true, 00:11:25.776 "write": true, 00:11:25.776 "unmap": true, 00:11:25.776 "flush": true, 00:11:25.776 "reset": true, 00:11:25.776 "nvme_admin": false, 00:11:25.776 "nvme_io": false, 00:11:25.776 "nvme_io_md": false, 00:11:25.776 "write_zeroes": true, 00:11:25.776 "zcopy": true, 00:11:25.776 "get_zone_info": false, 00:11:25.776 "zone_management": false, 00:11:25.776 "zone_append": false, 00:11:25.776 "compare": false, 00:11:25.776 "compare_and_write": false, 00:11:25.776 "abort": true, 00:11:25.776 "seek_hole": false, 00:11:25.776 "seek_data": false, 00:11:25.776 "copy": true, 00:11:25.776 "nvme_iov_md": false 00:11:25.776 }, 00:11:25.776 "memory_domains": [ 00:11:25.776 { 00:11:25.776 "dma_device_id": "system", 00:11:25.776 "dma_device_type": 1 00:11:25.776 }, 00:11:25.776 { 00:11:25.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.776 "dma_device_type": 2 00:11:25.776 } 00:11:25.776 ], 00:11:25.776 "driver_specific": {} 00:11:25.776 } 00:11:25.776 ] 00:11:25.776 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.776 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:25.776 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:25.776 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:25.776 10:39:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:25.776 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.776 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.776 BaseBdev3 00:11:25.776 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.776 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:25.776 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:25.776 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:25.776 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:25.776 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:25.776 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:25.776 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:25.776 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.777 [ 00:11:25.777 { 
00:11:25.777 "name": "BaseBdev3", 00:11:25.777 "aliases": [ 00:11:25.777 "4a8a386e-8729-42b4-aa40-019d027fcf5d" 00:11:25.777 ], 00:11:25.777 "product_name": "Malloc disk", 00:11:25.777 "block_size": 512, 00:11:25.777 "num_blocks": 65536, 00:11:25.777 "uuid": "4a8a386e-8729-42b4-aa40-019d027fcf5d", 00:11:25.777 "assigned_rate_limits": { 00:11:25.777 "rw_ios_per_sec": 0, 00:11:25.777 "rw_mbytes_per_sec": 0, 00:11:25.777 "r_mbytes_per_sec": 0, 00:11:25.777 "w_mbytes_per_sec": 0 00:11:25.777 }, 00:11:25.777 "claimed": false, 00:11:25.777 "zoned": false, 00:11:25.777 "supported_io_types": { 00:11:25.777 "read": true, 00:11:25.777 "write": true, 00:11:25.777 "unmap": true, 00:11:25.777 "flush": true, 00:11:25.777 "reset": true, 00:11:25.777 "nvme_admin": false, 00:11:25.777 "nvme_io": false, 00:11:25.777 "nvme_io_md": false, 00:11:25.777 "write_zeroes": true, 00:11:25.777 "zcopy": true, 00:11:25.777 "get_zone_info": false, 00:11:25.777 "zone_management": false, 00:11:25.777 "zone_append": false, 00:11:25.777 "compare": false, 00:11:25.777 "compare_and_write": false, 00:11:25.777 "abort": true, 00:11:25.777 "seek_hole": false, 00:11:25.777 "seek_data": false, 00:11:25.777 "copy": true, 00:11:25.777 "nvme_iov_md": false 00:11:25.777 }, 00:11:25.777 "memory_domains": [ 00:11:25.777 { 00:11:25.777 "dma_device_id": "system", 00:11:25.777 "dma_device_type": 1 00:11:25.777 }, 00:11:25.777 { 00:11:25.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.777 "dma_device_type": 2 00:11:25.777 } 00:11:25.777 ], 00:11:25.777 "driver_specific": {} 00:11:25.777 } 00:11:25.777 ] 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.777 BaseBdev4 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:25.777 [ 00:11:25.777 { 00:11:25.777 "name": "BaseBdev4", 00:11:25.777 "aliases": [ 00:11:25.777 "2870cd98-d75d-4dd8-a17a-706d9c0c624c" 00:11:25.777 ], 00:11:25.777 "product_name": "Malloc disk", 00:11:25.777 "block_size": 512, 00:11:25.777 "num_blocks": 65536, 00:11:25.777 "uuid": "2870cd98-d75d-4dd8-a17a-706d9c0c624c", 00:11:25.777 "assigned_rate_limits": { 00:11:25.777 "rw_ios_per_sec": 0, 00:11:25.777 "rw_mbytes_per_sec": 0, 00:11:25.777 "r_mbytes_per_sec": 0, 00:11:25.777 "w_mbytes_per_sec": 0 00:11:25.777 }, 00:11:25.777 "claimed": false, 00:11:25.777 "zoned": false, 00:11:25.777 "supported_io_types": { 00:11:25.777 "read": true, 00:11:25.777 "write": true, 00:11:25.777 "unmap": true, 00:11:25.777 "flush": true, 00:11:25.777 "reset": true, 00:11:25.777 "nvme_admin": false, 00:11:25.777 "nvme_io": false, 00:11:25.777 "nvme_io_md": false, 00:11:25.777 "write_zeroes": true, 00:11:25.777 "zcopy": true, 00:11:25.777 "get_zone_info": false, 00:11:25.777 "zone_management": false, 00:11:25.777 "zone_append": false, 00:11:25.777 "compare": false, 00:11:25.777 "compare_and_write": false, 00:11:25.777 "abort": true, 00:11:25.777 "seek_hole": false, 00:11:25.777 "seek_data": false, 00:11:25.777 "copy": true, 00:11:25.777 "nvme_iov_md": false 00:11:25.777 }, 00:11:25.777 "memory_domains": [ 00:11:25.777 { 00:11:25.777 "dma_device_id": "system", 00:11:25.777 "dma_device_type": 1 00:11:25.777 }, 00:11:25.777 { 00:11:25.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.777 "dma_device_type": 2 00:11:25.777 } 00:11:25.777 ], 00:11:25.777 "driver_specific": {} 00:11:25.777 } 00:11:25.777 ] 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:25.777 10:39:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.777 [2024-11-15 10:39:46.850710] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:25.777 [2024-11-15 10:39:46.850895] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:25.777 [2024-11-15 10:39:46.851034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:25.777 [2024-11-15 10:39:46.853560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:25.777 [2024-11-15 10:39:46.853764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.777 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.777 "name": "Existed_Raid", 00:11:25.777 "uuid": "f7ec15e3-3ac7-44fb-975a-25f4d3d27e40", 00:11:25.777 "strip_size_kb": 64, 00:11:25.777 "state": "configuring", 00:11:25.777 "raid_level": "concat", 00:11:25.777 "superblock": true, 00:11:25.777 "num_base_bdevs": 4, 00:11:25.777 "num_base_bdevs_discovered": 3, 00:11:25.777 "num_base_bdevs_operational": 4, 00:11:25.777 "base_bdevs_list": [ 00:11:25.777 { 00:11:25.777 "name": "BaseBdev1", 00:11:25.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.777 "is_configured": false, 00:11:25.777 "data_offset": 0, 00:11:25.777 "data_size": 0 00:11:25.777 }, 00:11:25.777 { 00:11:25.777 "name": "BaseBdev2", 00:11:25.777 "uuid": "25c364c3-d6ee-44bf-a7c5-4164e629e729", 00:11:25.777 "is_configured": true, 00:11:25.777 "data_offset": 2048, 00:11:25.777 "data_size": 63488 
00:11:25.777 }, 00:11:25.777 { 00:11:25.777 "name": "BaseBdev3", 00:11:25.777 "uuid": "4a8a386e-8729-42b4-aa40-019d027fcf5d", 00:11:25.777 "is_configured": true, 00:11:25.777 "data_offset": 2048, 00:11:25.777 "data_size": 63488 00:11:25.777 }, 00:11:25.777 { 00:11:25.777 "name": "BaseBdev4", 00:11:25.777 "uuid": "2870cd98-d75d-4dd8-a17a-706d9c0c624c", 00:11:25.777 "is_configured": true, 00:11:25.778 "data_offset": 2048, 00:11:25.778 "data_size": 63488 00:11:25.778 } 00:11:25.778 ] 00:11:25.778 }' 00:11:25.778 10:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.778 10:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.344 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:26.344 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.344 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.344 [2024-11-15 10:39:47.370839] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:26.344 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.344 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:26.344 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.344 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.344 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:26.344 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.344 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:26.344 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.344 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.344 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.344 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.344 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.344 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.344 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.344 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.344 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.344 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.344 "name": "Existed_Raid", 00:11:26.344 "uuid": "f7ec15e3-3ac7-44fb-975a-25f4d3d27e40", 00:11:26.344 "strip_size_kb": 64, 00:11:26.344 "state": "configuring", 00:11:26.344 "raid_level": "concat", 00:11:26.344 "superblock": true, 00:11:26.344 "num_base_bdevs": 4, 00:11:26.344 "num_base_bdevs_discovered": 2, 00:11:26.344 "num_base_bdevs_operational": 4, 00:11:26.344 "base_bdevs_list": [ 00:11:26.344 { 00:11:26.344 "name": "BaseBdev1", 00:11:26.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.344 "is_configured": false, 00:11:26.344 "data_offset": 0, 00:11:26.344 "data_size": 0 00:11:26.344 }, 00:11:26.344 { 00:11:26.344 "name": null, 00:11:26.344 "uuid": "25c364c3-d6ee-44bf-a7c5-4164e629e729", 00:11:26.344 "is_configured": false, 00:11:26.344 "data_offset": 0, 00:11:26.344 "data_size": 63488 
00:11:26.344 }, 00:11:26.344 { 00:11:26.344 "name": "BaseBdev3", 00:11:26.344 "uuid": "4a8a386e-8729-42b4-aa40-019d027fcf5d", 00:11:26.344 "is_configured": true, 00:11:26.344 "data_offset": 2048, 00:11:26.344 "data_size": 63488 00:11:26.344 }, 00:11:26.344 { 00:11:26.344 "name": "BaseBdev4", 00:11:26.344 "uuid": "2870cd98-d75d-4dd8-a17a-706d9c0c624c", 00:11:26.344 "is_configured": true, 00:11:26.344 "data_offset": 2048, 00:11:26.344 "data_size": 63488 00:11:26.344 } 00:11:26.344 ] 00:11:26.344 }' 00:11:26.344 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.344 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.910 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.910 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.910 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.910 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:26.910 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.910 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:26.910 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:26.910 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.910 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.910 [2024-11-15 10:39:47.941387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:26.910 BaseBdev1 00:11:26.910 10:39:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.910 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:26.910 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:26.910 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:26.910 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:26.910 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:26.910 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:26.910 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:26.910 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.910 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.910 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.910 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:26.910 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.910 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.910 [ 00:11:26.910 { 00:11:26.910 "name": "BaseBdev1", 00:11:26.910 "aliases": [ 00:11:26.910 "eb3ed9ae-4e99-4ce6-a5c2-87d70b7dd184" 00:11:26.910 ], 00:11:26.910 "product_name": "Malloc disk", 00:11:26.910 "block_size": 512, 00:11:26.910 "num_blocks": 65536, 00:11:26.910 "uuid": "eb3ed9ae-4e99-4ce6-a5c2-87d70b7dd184", 00:11:26.910 "assigned_rate_limits": { 00:11:26.910 "rw_ios_per_sec": 0, 00:11:26.910 "rw_mbytes_per_sec": 0, 
00:11:26.910 "r_mbytes_per_sec": 0, 00:11:26.910 "w_mbytes_per_sec": 0 00:11:26.910 }, 00:11:26.910 "claimed": true, 00:11:26.910 "claim_type": "exclusive_write", 00:11:26.910 "zoned": false, 00:11:26.910 "supported_io_types": { 00:11:26.910 "read": true, 00:11:26.910 "write": true, 00:11:26.910 "unmap": true, 00:11:26.910 "flush": true, 00:11:26.910 "reset": true, 00:11:26.910 "nvme_admin": false, 00:11:26.910 "nvme_io": false, 00:11:26.910 "nvme_io_md": false, 00:11:26.910 "write_zeroes": true, 00:11:26.910 "zcopy": true, 00:11:26.910 "get_zone_info": false, 00:11:26.910 "zone_management": false, 00:11:26.910 "zone_append": false, 00:11:26.910 "compare": false, 00:11:26.910 "compare_and_write": false, 00:11:26.910 "abort": true, 00:11:26.910 "seek_hole": false, 00:11:26.910 "seek_data": false, 00:11:26.910 "copy": true, 00:11:26.910 "nvme_iov_md": false 00:11:26.910 }, 00:11:26.910 "memory_domains": [ 00:11:26.910 { 00:11:26.910 "dma_device_id": "system", 00:11:26.910 "dma_device_type": 1 00:11:26.910 }, 00:11:26.910 { 00:11:26.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.910 "dma_device_type": 2 00:11:26.910 } 00:11:26.910 ], 00:11:26.910 "driver_specific": {} 00:11:26.910 } 00:11:26.910 ] 00:11:26.910 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.910 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:26.911 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:26.911 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.911 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.911 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:26.911 10:39:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.911 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.911 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.911 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.911 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.911 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.911 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.911 10:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.911 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.911 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.911 10:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.911 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.911 "name": "Existed_Raid", 00:11:26.911 "uuid": "f7ec15e3-3ac7-44fb-975a-25f4d3d27e40", 00:11:26.911 "strip_size_kb": 64, 00:11:26.911 "state": "configuring", 00:11:26.911 "raid_level": "concat", 00:11:26.911 "superblock": true, 00:11:26.911 "num_base_bdevs": 4, 00:11:26.911 "num_base_bdevs_discovered": 3, 00:11:26.911 "num_base_bdevs_operational": 4, 00:11:26.911 "base_bdevs_list": [ 00:11:26.911 { 00:11:26.911 "name": "BaseBdev1", 00:11:26.911 "uuid": "eb3ed9ae-4e99-4ce6-a5c2-87d70b7dd184", 00:11:26.911 "is_configured": true, 00:11:26.911 "data_offset": 2048, 00:11:26.911 "data_size": 63488 00:11:26.911 }, 00:11:26.911 { 
00:11:26.911 "name": null, 00:11:26.911 "uuid": "25c364c3-d6ee-44bf-a7c5-4164e629e729", 00:11:26.911 "is_configured": false, 00:11:26.911 "data_offset": 0, 00:11:26.911 "data_size": 63488 00:11:26.911 }, 00:11:26.911 { 00:11:26.911 "name": "BaseBdev3", 00:11:26.911 "uuid": "4a8a386e-8729-42b4-aa40-019d027fcf5d", 00:11:26.911 "is_configured": true, 00:11:26.911 "data_offset": 2048, 00:11:26.911 "data_size": 63488 00:11:26.911 }, 00:11:26.911 { 00:11:26.911 "name": "BaseBdev4", 00:11:26.911 "uuid": "2870cd98-d75d-4dd8-a17a-706d9c0c624c", 00:11:26.911 "is_configured": true, 00:11:26.911 "data_offset": 2048, 00:11:26.911 "data_size": 63488 00:11:26.911 } 00:11:26.911 ] 00:11:26.911 }' 00:11:26.911 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.911 10:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.478 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.478 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:27.478 10:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.478 10:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.478 10:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.478 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:27.478 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:27.478 10:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.478 10:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.478 [2024-11-15 10:39:48.481623] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:27.478 10:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.478 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:27.478 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.478 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.478 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:27.478 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.478 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.478 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.478 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.478 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.478 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.478 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.478 10:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.478 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.478 10:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.478 10:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.478 10:39:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.478 "name": "Existed_Raid", 00:11:27.478 "uuid": "f7ec15e3-3ac7-44fb-975a-25f4d3d27e40", 00:11:27.478 "strip_size_kb": 64, 00:11:27.478 "state": "configuring", 00:11:27.478 "raid_level": "concat", 00:11:27.478 "superblock": true, 00:11:27.478 "num_base_bdevs": 4, 00:11:27.478 "num_base_bdevs_discovered": 2, 00:11:27.478 "num_base_bdevs_operational": 4, 00:11:27.478 "base_bdevs_list": [ 00:11:27.478 { 00:11:27.478 "name": "BaseBdev1", 00:11:27.478 "uuid": "eb3ed9ae-4e99-4ce6-a5c2-87d70b7dd184", 00:11:27.478 "is_configured": true, 00:11:27.478 "data_offset": 2048, 00:11:27.478 "data_size": 63488 00:11:27.478 }, 00:11:27.478 { 00:11:27.478 "name": null, 00:11:27.478 "uuid": "25c364c3-d6ee-44bf-a7c5-4164e629e729", 00:11:27.478 "is_configured": false, 00:11:27.478 "data_offset": 0, 00:11:27.478 "data_size": 63488 00:11:27.478 }, 00:11:27.478 { 00:11:27.478 "name": null, 00:11:27.478 "uuid": "4a8a386e-8729-42b4-aa40-019d027fcf5d", 00:11:27.478 "is_configured": false, 00:11:27.478 "data_offset": 0, 00:11:27.478 "data_size": 63488 00:11:27.478 }, 00:11:27.478 { 00:11:27.478 "name": "BaseBdev4", 00:11:27.478 "uuid": "2870cd98-d75d-4dd8-a17a-706d9c0c624c", 00:11:27.478 "is_configured": true, 00:11:27.478 "data_offset": 2048, 00:11:27.478 "data_size": 63488 00:11:27.478 } 00:11:27.478 ] 00:11:27.478 }' 00:11:27.478 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.478 10:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.042 10:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.042 10:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.042 10:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.042 10:39:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:28.043 10:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.043 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:28.043 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:28.043 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.043 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.043 [2024-11-15 10:39:49.029756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:28.043 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.043 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:28.043 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.043 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.043 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:28.043 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.043 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.043 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.043 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.043 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:28.043 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.043 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.043 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.043 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.043 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.043 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.043 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.043 "name": "Existed_Raid", 00:11:28.043 "uuid": "f7ec15e3-3ac7-44fb-975a-25f4d3d27e40", 00:11:28.043 "strip_size_kb": 64, 00:11:28.043 "state": "configuring", 00:11:28.043 "raid_level": "concat", 00:11:28.043 "superblock": true, 00:11:28.043 "num_base_bdevs": 4, 00:11:28.043 "num_base_bdevs_discovered": 3, 00:11:28.043 "num_base_bdevs_operational": 4, 00:11:28.043 "base_bdevs_list": [ 00:11:28.043 { 00:11:28.043 "name": "BaseBdev1", 00:11:28.043 "uuid": "eb3ed9ae-4e99-4ce6-a5c2-87d70b7dd184", 00:11:28.043 "is_configured": true, 00:11:28.043 "data_offset": 2048, 00:11:28.043 "data_size": 63488 00:11:28.043 }, 00:11:28.043 { 00:11:28.043 "name": null, 00:11:28.043 "uuid": "25c364c3-d6ee-44bf-a7c5-4164e629e729", 00:11:28.043 "is_configured": false, 00:11:28.043 "data_offset": 0, 00:11:28.043 "data_size": 63488 00:11:28.043 }, 00:11:28.043 { 00:11:28.043 "name": "BaseBdev3", 00:11:28.043 "uuid": "4a8a386e-8729-42b4-aa40-019d027fcf5d", 00:11:28.043 "is_configured": true, 00:11:28.043 "data_offset": 2048, 00:11:28.043 "data_size": 63488 00:11:28.043 }, 00:11:28.043 { 00:11:28.043 "name": "BaseBdev4", 00:11:28.043 "uuid": 
"2870cd98-d75d-4dd8-a17a-706d9c0c624c", 00:11:28.043 "is_configured": true, 00:11:28.043 "data_offset": 2048, 00:11:28.043 "data_size": 63488 00:11:28.043 } 00:11:28.043 ] 00:11:28.043 }' 00:11:28.043 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.043 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.606 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.606 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:28.606 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.606 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.606 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.606 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:28.606 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:28.606 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.606 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.606 [2024-11-15 10:39:49.621971] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:28.606 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.606 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:28.606 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.606 10:39:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.606 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:28.606 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.606 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.606 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.606 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.606 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.606 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.606 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.606 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.606 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.606 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.606 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.606 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.606 "name": "Existed_Raid", 00:11:28.606 "uuid": "f7ec15e3-3ac7-44fb-975a-25f4d3d27e40", 00:11:28.606 "strip_size_kb": 64, 00:11:28.606 "state": "configuring", 00:11:28.606 "raid_level": "concat", 00:11:28.606 "superblock": true, 00:11:28.606 "num_base_bdevs": 4, 00:11:28.606 "num_base_bdevs_discovered": 2, 00:11:28.606 "num_base_bdevs_operational": 4, 00:11:28.606 "base_bdevs_list": [ 00:11:28.606 { 00:11:28.606 "name": null, 00:11:28.606 
"uuid": "eb3ed9ae-4e99-4ce6-a5c2-87d70b7dd184", 00:11:28.606 "is_configured": false, 00:11:28.606 "data_offset": 0, 00:11:28.606 "data_size": 63488 00:11:28.606 }, 00:11:28.606 { 00:11:28.606 "name": null, 00:11:28.606 "uuid": "25c364c3-d6ee-44bf-a7c5-4164e629e729", 00:11:28.606 "is_configured": false, 00:11:28.606 "data_offset": 0, 00:11:28.606 "data_size": 63488 00:11:28.606 }, 00:11:28.606 { 00:11:28.606 "name": "BaseBdev3", 00:11:28.606 "uuid": "4a8a386e-8729-42b4-aa40-019d027fcf5d", 00:11:28.606 "is_configured": true, 00:11:28.606 "data_offset": 2048, 00:11:28.606 "data_size": 63488 00:11:28.606 }, 00:11:28.606 { 00:11:28.606 "name": "BaseBdev4", 00:11:28.606 "uuid": "2870cd98-d75d-4dd8-a17a-706d9c0c624c", 00:11:28.606 "is_configured": true, 00:11:28.606 "data_offset": 2048, 00:11:28.606 "data_size": 63488 00:11:28.606 } 00:11:28.606 ] 00:11:28.606 }' 00:11:28.606 10:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.606 10:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.170 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.170 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:29.170 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.170 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.170 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.170 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:29.170 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:29.170 10:39:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.170 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.170 [2024-11-15 10:39:50.307878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:29.170 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.170 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:29.170 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.170 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.170 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:29.170 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.170 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.170 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.170 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.170 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.170 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.170 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.170 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.170 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.170 10:39:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.428 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.428 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.428 "name": "Existed_Raid", 00:11:29.428 "uuid": "f7ec15e3-3ac7-44fb-975a-25f4d3d27e40", 00:11:29.428 "strip_size_kb": 64, 00:11:29.428 "state": "configuring", 00:11:29.428 "raid_level": "concat", 00:11:29.428 "superblock": true, 00:11:29.428 "num_base_bdevs": 4, 00:11:29.428 "num_base_bdevs_discovered": 3, 00:11:29.428 "num_base_bdevs_operational": 4, 00:11:29.428 "base_bdevs_list": [ 00:11:29.428 { 00:11:29.428 "name": null, 00:11:29.428 "uuid": "eb3ed9ae-4e99-4ce6-a5c2-87d70b7dd184", 00:11:29.428 "is_configured": false, 00:11:29.428 "data_offset": 0, 00:11:29.428 "data_size": 63488 00:11:29.428 }, 00:11:29.428 { 00:11:29.428 "name": "BaseBdev2", 00:11:29.428 "uuid": "25c364c3-d6ee-44bf-a7c5-4164e629e729", 00:11:29.428 "is_configured": true, 00:11:29.428 "data_offset": 2048, 00:11:29.428 "data_size": 63488 00:11:29.428 }, 00:11:29.428 { 00:11:29.428 "name": "BaseBdev3", 00:11:29.428 "uuid": "4a8a386e-8729-42b4-aa40-019d027fcf5d", 00:11:29.428 "is_configured": true, 00:11:29.428 "data_offset": 2048, 00:11:29.428 "data_size": 63488 00:11:29.428 }, 00:11:29.428 { 00:11:29.428 "name": "BaseBdev4", 00:11:29.428 "uuid": "2870cd98-d75d-4dd8-a17a-706d9c0c624c", 00:11:29.428 "is_configured": true, 00:11:29.428 "data_offset": 2048, 00:11:29.428 "data_size": 63488 00:11:29.428 } 00:11:29.428 ] 00:11:29.428 }' 00:11:29.428 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.428 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.687 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:29.687 10:39:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.687 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.687 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.687 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.945 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:29.945 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.945 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.945 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.945 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:29.945 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.945 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u eb3ed9ae-4e99-4ce6-a5c2-87d70b7dd184 00:11:29.945 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.945 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.945 [2024-11-15 10:39:50.950096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:29.945 [2024-11-15 10:39:50.950423] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:29.945 [2024-11-15 10:39:50.950442] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:29.945 NewBaseBdev 00:11:29.945 [2024-11-15 10:39:50.950876] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:29.945 [2024-11-15 10:39:50.951142] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:29.945 [2024-11-15 10:39:50.951185] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:29.945 [2024-11-15 10:39:50.951410] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:29.945 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.945 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:29.945 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:29.945 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:29.945 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:29.946 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:29.946 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:29.946 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:29.946 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.946 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.946 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.946 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:29.946 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.946 10:39:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.946 [ 00:11:29.946 { 00:11:29.946 "name": "NewBaseBdev", 00:11:29.946 "aliases": [ 00:11:29.946 "eb3ed9ae-4e99-4ce6-a5c2-87d70b7dd184" 00:11:29.946 ], 00:11:29.946 "product_name": "Malloc disk", 00:11:29.946 "block_size": 512, 00:11:29.946 "num_blocks": 65536, 00:11:29.946 "uuid": "eb3ed9ae-4e99-4ce6-a5c2-87d70b7dd184", 00:11:29.946 "assigned_rate_limits": { 00:11:29.946 "rw_ios_per_sec": 0, 00:11:29.946 "rw_mbytes_per_sec": 0, 00:11:29.946 "r_mbytes_per_sec": 0, 00:11:29.946 "w_mbytes_per_sec": 0 00:11:29.946 }, 00:11:29.946 "claimed": true, 00:11:29.946 "claim_type": "exclusive_write", 00:11:29.946 "zoned": false, 00:11:29.946 "supported_io_types": { 00:11:29.946 "read": true, 00:11:29.946 "write": true, 00:11:29.946 "unmap": true, 00:11:29.946 "flush": true, 00:11:29.946 "reset": true, 00:11:29.946 "nvme_admin": false, 00:11:29.946 "nvme_io": false, 00:11:29.946 "nvme_io_md": false, 00:11:29.946 "write_zeroes": true, 00:11:29.946 "zcopy": true, 00:11:29.946 "get_zone_info": false, 00:11:29.946 "zone_management": false, 00:11:29.946 "zone_append": false, 00:11:29.946 "compare": false, 00:11:29.946 "compare_and_write": false, 00:11:29.946 "abort": true, 00:11:29.946 "seek_hole": false, 00:11:29.946 "seek_data": false, 00:11:29.946 "copy": true, 00:11:29.946 "nvme_iov_md": false 00:11:29.946 }, 00:11:29.946 "memory_domains": [ 00:11:29.946 { 00:11:29.946 "dma_device_id": "system", 00:11:29.946 "dma_device_type": 1 00:11:29.946 }, 00:11:29.946 { 00:11:29.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.946 "dma_device_type": 2 00:11:29.946 } 00:11:29.946 ], 00:11:29.946 "driver_specific": {} 00:11:29.946 } 00:11:29.946 ] 00:11:29.946 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.946 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:29.946 10:39:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:29.946 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.946 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:29.946 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:29.946 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.946 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.946 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.946 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.946 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.946 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.946 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.946 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.946 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.946 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.946 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.946 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.946 "name": "Existed_Raid", 00:11:29.946 "uuid": "f7ec15e3-3ac7-44fb-975a-25f4d3d27e40", 00:11:29.946 "strip_size_kb": 64, 00:11:29.946 
"state": "online", 00:11:29.946 "raid_level": "concat", 00:11:29.946 "superblock": true, 00:11:29.946 "num_base_bdevs": 4, 00:11:29.946 "num_base_bdevs_discovered": 4, 00:11:29.946 "num_base_bdevs_operational": 4, 00:11:29.946 "base_bdevs_list": [ 00:11:29.946 { 00:11:29.946 "name": "NewBaseBdev", 00:11:29.946 "uuid": "eb3ed9ae-4e99-4ce6-a5c2-87d70b7dd184", 00:11:29.946 "is_configured": true, 00:11:29.946 "data_offset": 2048, 00:11:29.946 "data_size": 63488 00:11:29.946 }, 00:11:29.946 { 00:11:29.946 "name": "BaseBdev2", 00:11:29.946 "uuid": "25c364c3-d6ee-44bf-a7c5-4164e629e729", 00:11:29.946 "is_configured": true, 00:11:29.946 "data_offset": 2048, 00:11:29.946 "data_size": 63488 00:11:29.946 }, 00:11:29.946 { 00:11:29.946 "name": "BaseBdev3", 00:11:29.946 "uuid": "4a8a386e-8729-42b4-aa40-019d027fcf5d", 00:11:29.946 "is_configured": true, 00:11:29.946 "data_offset": 2048, 00:11:29.946 "data_size": 63488 00:11:29.946 }, 00:11:29.946 { 00:11:29.946 "name": "BaseBdev4", 00:11:29.946 "uuid": "2870cd98-d75d-4dd8-a17a-706d9c0c624c", 00:11:29.946 "is_configured": true, 00:11:29.946 "data_offset": 2048, 00:11:29.946 "data_size": 63488 00:11:29.946 } 00:11:29.946 ] 00:11:29.946 }' 00:11:29.946 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.946 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.512 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:30.512 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:30.512 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:30.512 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:30.512 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:30.512 
10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:30.512 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:30.512 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:30.512 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.512 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.512 [2024-11-15 10:39:51.526874] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:30.512 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.512 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:30.512 "name": "Existed_Raid", 00:11:30.512 "aliases": [ 00:11:30.512 "f7ec15e3-3ac7-44fb-975a-25f4d3d27e40" 00:11:30.512 ], 00:11:30.512 "product_name": "Raid Volume", 00:11:30.512 "block_size": 512, 00:11:30.512 "num_blocks": 253952, 00:11:30.512 "uuid": "f7ec15e3-3ac7-44fb-975a-25f4d3d27e40", 00:11:30.512 "assigned_rate_limits": { 00:11:30.512 "rw_ios_per_sec": 0, 00:11:30.512 "rw_mbytes_per_sec": 0, 00:11:30.512 "r_mbytes_per_sec": 0, 00:11:30.512 "w_mbytes_per_sec": 0 00:11:30.512 }, 00:11:30.512 "claimed": false, 00:11:30.512 "zoned": false, 00:11:30.512 "supported_io_types": { 00:11:30.512 "read": true, 00:11:30.512 "write": true, 00:11:30.512 "unmap": true, 00:11:30.513 "flush": true, 00:11:30.513 "reset": true, 00:11:30.513 "nvme_admin": false, 00:11:30.513 "nvme_io": false, 00:11:30.513 "nvme_io_md": false, 00:11:30.513 "write_zeroes": true, 00:11:30.513 "zcopy": false, 00:11:30.513 "get_zone_info": false, 00:11:30.513 "zone_management": false, 00:11:30.513 "zone_append": false, 00:11:30.513 "compare": false, 00:11:30.513 "compare_and_write": false, 00:11:30.513 "abort": 
false, 00:11:30.513 "seek_hole": false, 00:11:30.513 "seek_data": false, 00:11:30.513 "copy": false, 00:11:30.513 "nvme_iov_md": false 00:11:30.513 }, 00:11:30.513 "memory_domains": [ 00:11:30.513 { 00:11:30.513 "dma_device_id": "system", 00:11:30.513 "dma_device_type": 1 00:11:30.513 }, 00:11:30.513 { 00:11:30.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.513 "dma_device_type": 2 00:11:30.513 }, 00:11:30.513 { 00:11:30.513 "dma_device_id": "system", 00:11:30.513 "dma_device_type": 1 00:11:30.513 }, 00:11:30.513 { 00:11:30.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.513 "dma_device_type": 2 00:11:30.513 }, 00:11:30.513 { 00:11:30.513 "dma_device_id": "system", 00:11:30.513 "dma_device_type": 1 00:11:30.513 }, 00:11:30.513 { 00:11:30.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.513 "dma_device_type": 2 00:11:30.513 }, 00:11:30.513 { 00:11:30.513 "dma_device_id": "system", 00:11:30.513 "dma_device_type": 1 00:11:30.513 }, 00:11:30.513 { 00:11:30.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.513 "dma_device_type": 2 00:11:30.513 } 00:11:30.513 ], 00:11:30.513 "driver_specific": { 00:11:30.513 "raid": { 00:11:30.513 "uuid": "f7ec15e3-3ac7-44fb-975a-25f4d3d27e40", 00:11:30.513 "strip_size_kb": 64, 00:11:30.513 "state": "online", 00:11:30.513 "raid_level": "concat", 00:11:30.513 "superblock": true, 00:11:30.513 "num_base_bdevs": 4, 00:11:30.513 "num_base_bdevs_discovered": 4, 00:11:30.513 "num_base_bdevs_operational": 4, 00:11:30.513 "base_bdevs_list": [ 00:11:30.513 { 00:11:30.513 "name": "NewBaseBdev", 00:11:30.513 "uuid": "eb3ed9ae-4e99-4ce6-a5c2-87d70b7dd184", 00:11:30.513 "is_configured": true, 00:11:30.513 "data_offset": 2048, 00:11:30.513 "data_size": 63488 00:11:30.513 }, 00:11:30.513 { 00:11:30.513 "name": "BaseBdev2", 00:11:30.513 "uuid": "25c364c3-d6ee-44bf-a7c5-4164e629e729", 00:11:30.513 "is_configured": true, 00:11:30.513 "data_offset": 2048, 00:11:30.513 "data_size": 63488 00:11:30.513 }, 00:11:30.513 { 00:11:30.513 
"name": "BaseBdev3", 00:11:30.513 "uuid": "4a8a386e-8729-42b4-aa40-019d027fcf5d", 00:11:30.513 "is_configured": true, 00:11:30.513 "data_offset": 2048, 00:11:30.513 "data_size": 63488 00:11:30.513 }, 00:11:30.513 { 00:11:30.513 "name": "BaseBdev4", 00:11:30.513 "uuid": "2870cd98-d75d-4dd8-a17a-706d9c0c624c", 00:11:30.513 "is_configured": true, 00:11:30.513 "data_offset": 2048, 00:11:30.513 "data_size": 63488 00:11:30.513 } 00:11:30.513 ] 00:11:30.513 } 00:11:30.513 } 00:11:30.513 }' 00:11:30.513 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:30.513 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:30.513 BaseBdev2 00:11:30.513 BaseBdev3 00:11:30.513 BaseBdev4' 00:11:30.513 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.513 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:30.513 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.513 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:30.513 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.513 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.513 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.772 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.772 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.772 10:39:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.772 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.772 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:30.772 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.772 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.772 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.772 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.772 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.772 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.772 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.772 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.772 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:30.772 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.772 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.772 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.772 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.772 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:30.772 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.772 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:30.772 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.772 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.772 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.772 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.772 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.772 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.772 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:30.772 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.772 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.772 [2024-11-15 10:39:51.878419] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:30.772 [2024-11-15 10:39:51.878459] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:30.772 [2024-11-15 10:39:51.878585] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:30.772 [2024-11-15 10:39:51.878680] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:30.772 [2024-11-15 10:39:51.878698] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:30.772 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.772 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72056 00:11:30.772 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72056 ']' 00:11:30.772 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72056 00:11:30.772 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:30.772 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:30.772 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72056 00:11:30.772 killing process with pid 72056 00:11:30.772 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:30.772 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:30.772 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72056' 00:11:30.772 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72056 00:11:30.772 [2024-11-15 10:39:51.913792] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:30.772 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72056 00:11:31.339 [2024-11-15 10:39:52.270744] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:32.274 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:32.274 00:11:32.274 real 0m12.602s 00:11:32.274 user 0m20.894s 00:11:32.274 sys 0m1.704s 00:11:32.274 ************************************ 00:11:32.274 END TEST raid_state_function_test_sb 00:11:32.274 
************************************ 00:11:32.274 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:32.274 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.274 10:39:53 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:11:32.274 10:39:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:32.274 10:39:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:32.274 10:39:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:32.274 ************************************ 00:11:32.274 START TEST raid_superblock_test 00:11:32.274 ************************************ 00:11:32.274 10:39:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:11:32.274 10:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:32.274 10:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:32.274 10:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:32.274 10:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:32.274 10:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:32.274 10:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:32.274 10:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:32.274 10:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:32.274 10:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:32.274 10:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:32.274 10:39:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:32.274 10:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:32.275 10:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:32.275 10:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:32.275 10:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:32.275 10:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:32.275 10:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72734 00:11:32.275 10:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:32.275 10:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72734 00:11:32.275 10:39:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72734 ']' 00:11:32.275 10:39:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.275 10:39:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:32.275 10:39:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.275 10:39:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:32.275 10:39:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.533 [2024-11-15 10:39:53.469352] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
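The `waitforlisten 72734` step above blocks until the freshly launched `bdev_svc` process is alive and serving on `/var/tmp/spdk.sock` (the "Waiting for process to start up and listen on UNIX domain socket" message). A minimal sketch of that polling loop, under stated assumptions: the function name is hypothetical, and the real helper in `autotest_common.sh` also verifies the server answers RPCs rather than only testing that the socket file exists.

```shell
# Hedged sketch, not the actual autotest_common.sh source: poll until the
# process with $pid is alive and its RPC unix socket appears, giving up
# after max_retries attempts (the log above shows max_retries=100).
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
    local i
    for ((i = 0; i < max_retries; i++)); do
        # If the target process died, there is nothing to wait for.
        kill -0 "$pid" 2> /dev/null || return 1
        # Simplified readiness check: the unix-domain socket exists.
        [ -S "$rpc_addr" ] && return 0
        sleep 0.1
    done
    return 1
}
```

The real helper is stricter: a socket file can exist before the RPC server is ready, so it probes with an actual RPC call before returning success.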
00:11:32.533 [2024-11-15 10:39:53.469560] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72734 ] 00:11:32.533 [2024-11-15 10:39:53.648991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.792 [2024-11-15 10:39:53.778920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.078 [2024-11-15 10:39:53.983592] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:33.078 [2024-11-15 10:39:53.983640] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:33.646 10:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:33.646 10:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:33.646 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:33.646 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:33.646 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:33.646 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:33.646 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:33.646 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:33.646 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:33.646 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:33.646 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:33.646 
10:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.646 10:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.646 malloc1 00:11:33.646 10:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.646 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:33.646 10:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.646 10:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.646 [2024-11-15 10:39:54.568704] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:33.646 [2024-11-15 10:39:54.569022] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.646 [2024-11-15 10:39:54.569180] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:33.646 [2024-11-15 10:39:54.569303] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.646 [2024-11-15 10:39:54.572268] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.646 [2024-11-15 10:39:54.572434] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:33.646 pt1 00:11:33.646 10:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.646 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:33.646 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:33.646 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:33.646 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:33.646 10:39:54 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:33.646 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.647 malloc2 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.647 [2024-11-15 10:39:54.625403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:33.647 [2024-11-15 10:39:54.625483] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.647 [2024-11-15 10:39:54.625540] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:33.647 [2024-11-15 10:39:54.625556] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.647 [2024-11-15 10:39:54.628309] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.647 [2024-11-15 10:39:54.628356] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:33.647 
pt2 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.647 malloc3 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.647 [2024-11-15 10:39:54.693021] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:33.647 [2024-11-15 10:39:54.693220] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.647 [2024-11-15 10:39:54.693268] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:33.647 [2024-11-15 10:39:54.693286] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.647 [2024-11-15 10:39:54.696045] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.647 [2024-11-15 10:39:54.696091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:33.647 pt3 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.647 malloc4 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.647 [2024-11-15 10:39:54.749148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:33.647 [2024-11-15 10:39:54.749223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.647 [2024-11-15 10:39:54.749258] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:33.647 [2024-11-15 10:39:54.749274] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.647 [2024-11-15 10:39:54.752091] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.647 [2024-11-15 10:39:54.752137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:33.647 pt4 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.647 [2024-11-15 10:39:54.761150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:33.647 [2024-11-15 
10:39:54.763552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:33.647 [2024-11-15 10:39:54.763646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:33.647 [2024-11-15 10:39:54.763743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:33.647 [2024-11-15 10:39:54.763993] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:33.647 [2024-11-15 10:39:54.764011] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:33.647 [2024-11-15 10:39:54.764363] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:33.647 [2024-11-15 10:39:54.764606] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:33.647 [2024-11-15 10:39:54.764629] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:33.647 [2024-11-15 10:39:54.764840] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.647 10:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.906 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.906 "name": "raid_bdev1", 00:11:33.906 "uuid": "ba26fc2f-5371-4d1b-8f92-e85724bb4445", 00:11:33.906 "strip_size_kb": 64, 00:11:33.906 "state": "online", 00:11:33.906 "raid_level": "concat", 00:11:33.906 "superblock": true, 00:11:33.906 "num_base_bdevs": 4, 00:11:33.906 "num_base_bdevs_discovered": 4, 00:11:33.906 "num_base_bdevs_operational": 4, 00:11:33.906 "base_bdevs_list": [ 00:11:33.906 { 00:11:33.906 "name": "pt1", 00:11:33.906 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:33.906 "is_configured": true, 00:11:33.906 "data_offset": 2048, 00:11:33.906 "data_size": 63488 00:11:33.906 }, 00:11:33.906 { 00:11:33.906 "name": "pt2", 00:11:33.906 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:33.906 "is_configured": true, 00:11:33.906 "data_offset": 2048, 00:11:33.906 "data_size": 63488 00:11:33.906 }, 00:11:33.906 { 00:11:33.906 "name": "pt3", 00:11:33.906 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:33.906 "is_configured": true, 00:11:33.906 "data_offset": 2048, 00:11:33.906 
"data_size": 63488 00:11:33.906 }, 00:11:33.906 { 00:11:33.906 "name": "pt4", 00:11:33.906 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:33.906 "is_configured": true, 00:11:33.906 "data_offset": 2048, 00:11:33.907 "data_size": 63488 00:11:33.907 } 00:11:33.907 ] 00:11:33.907 }' 00:11:33.907 10:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.907 10:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.165 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:34.165 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:34.165 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:34.165 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:34.165 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:34.165 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:34.165 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:34.165 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:34.165 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.165 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.165 [2024-11-15 10:39:55.277670] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:34.165 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.165 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:34.165 "name": "raid_bdev1", 00:11:34.165 "aliases": [ 00:11:34.165 "ba26fc2f-5371-4d1b-8f92-e85724bb4445" 
00:11:34.165 ], 00:11:34.165 "product_name": "Raid Volume", 00:11:34.165 "block_size": 512, 00:11:34.165 "num_blocks": 253952, 00:11:34.165 "uuid": "ba26fc2f-5371-4d1b-8f92-e85724bb4445", 00:11:34.165 "assigned_rate_limits": { 00:11:34.165 "rw_ios_per_sec": 0, 00:11:34.165 "rw_mbytes_per_sec": 0, 00:11:34.165 "r_mbytes_per_sec": 0, 00:11:34.165 "w_mbytes_per_sec": 0 00:11:34.165 }, 00:11:34.165 "claimed": false, 00:11:34.165 "zoned": false, 00:11:34.165 "supported_io_types": { 00:11:34.165 "read": true, 00:11:34.165 "write": true, 00:11:34.165 "unmap": true, 00:11:34.165 "flush": true, 00:11:34.165 "reset": true, 00:11:34.165 "nvme_admin": false, 00:11:34.165 "nvme_io": false, 00:11:34.165 "nvme_io_md": false, 00:11:34.165 "write_zeroes": true, 00:11:34.165 "zcopy": false, 00:11:34.165 "get_zone_info": false, 00:11:34.165 "zone_management": false, 00:11:34.165 "zone_append": false, 00:11:34.165 "compare": false, 00:11:34.165 "compare_and_write": false, 00:11:34.165 "abort": false, 00:11:34.165 "seek_hole": false, 00:11:34.165 "seek_data": false, 00:11:34.165 "copy": false, 00:11:34.165 "nvme_iov_md": false 00:11:34.165 }, 00:11:34.165 "memory_domains": [ 00:11:34.165 { 00:11:34.165 "dma_device_id": "system", 00:11:34.165 "dma_device_type": 1 00:11:34.165 }, 00:11:34.165 { 00:11:34.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.165 "dma_device_type": 2 00:11:34.165 }, 00:11:34.165 { 00:11:34.165 "dma_device_id": "system", 00:11:34.165 "dma_device_type": 1 00:11:34.165 }, 00:11:34.165 { 00:11:34.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.165 "dma_device_type": 2 00:11:34.165 }, 00:11:34.165 { 00:11:34.165 "dma_device_id": "system", 00:11:34.165 "dma_device_type": 1 00:11:34.165 }, 00:11:34.165 { 00:11:34.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.165 "dma_device_type": 2 00:11:34.165 }, 00:11:34.165 { 00:11:34.165 "dma_device_id": "system", 00:11:34.165 "dma_device_type": 1 00:11:34.165 }, 00:11:34.165 { 00:11:34.165 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:34.165 "dma_device_type": 2 00:11:34.165 } 00:11:34.165 ], 00:11:34.165 "driver_specific": { 00:11:34.165 "raid": { 00:11:34.165 "uuid": "ba26fc2f-5371-4d1b-8f92-e85724bb4445", 00:11:34.165 "strip_size_kb": 64, 00:11:34.165 "state": "online", 00:11:34.165 "raid_level": "concat", 00:11:34.165 "superblock": true, 00:11:34.165 "num_base_bdevs": 4, 00:11:34.165 "num_base_bdevs_discovered": 4, 00:11:34.165 "num_base_bdevs_operational": 4, 00:11:34.165 "base_bdevs_list": [ 00:11:34.165 { 00:11:34.165 "name": "pt1", 00:11:34.165 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:34.165 "is_configured": true, 00:11:34.165 "data_offset": 2048, 00:11:34.165 "data_size": 63488 00:11:34.165 }, 00:11:34.165 { 00:11:34.165 "name": "pt2", 00:11:34.165 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:34.165 "is_configured": true, 00:11:34.165 "data_offset": 2048, 00:11:34.165 "data_size": 63488 00:11:34.165 }, 00:11:34.165 { 00:11:34.165 "name": "pt3", 00:11:34.165 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:34.165 "is_configured": true, 00:11:34.165 "data_offset": 2048, 00:11:34.165 "data_size": 63488 00:11:34.165 }, 00:11:34.165 { 00:11:34.165 "name": "pt4", 00:11:34.165 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:34.165 "is_configured": true, 00:11:34.165 "data_offset": 2048, 00:11:34.165 "data_size": 63488 00:11:34.165 } 00:11:34.165 ] 00:11:34.165 } 00:11:34.165 } 00:11:34.165 }' 00:11:34.165 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:34.492 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:34.492 pt2 00:11:34.492 pt3 00:11:34.492 pt4' 00:11:34.492 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.492 10:39:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:34.493 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.493 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.493 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:34.493 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.493 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.493 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.493 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.493 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.493 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.493 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:34.493 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.493 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.493 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.493 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.493 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.493 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.493 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.493 10:39:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:34.493 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.493 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.493 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.493 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.493 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.493 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.493 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.493 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.493 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:34.493 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.493 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.493 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.493 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.493 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.493 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:34.493 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.493 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
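The repeated `[[ 512 == \5\1\2\ \ \ ]]` checks above are xtrace's rendering of a plain string comparison: `jq` joins `.block_size`, `.md_size`, `.md_interleave` and `.dif_type` with spaces, and for a 512-byte bdev with no metadata the three null fields join as empty strings, giving `'512   '` with three trailing spaces. Because the right-hand side of `[[ == ]]` is unquoted in the script, xtrace prints it glob-escaped, hence the backslashes. A standalone sketch of that comparison, using sample values so no running SPDK target is needed:

```shell
# Emulate the jq join without a live SPDK target: null JSON fields join as
# empty strings, so a metadata-less 512-byte bdev yields "512" plus three
# trailing spaces -- exactly the escaped \5\1\2\ \ \  pattern in the xtrace.
fields=(512 "" "" "")        # block_size, md_size, md_interleave, dif_type
cmp_base_bdev="${fields[*]}" # joined with single spaces: '512   '
cmp_raid_bdev='512   '       # sample value, as captured from the raid bdev
if [[ "$cmp_base_bdev" == "$cmp_raid_bdev" ]]; then
    echo "base bdev properties match raid bdev"
fi
```

The test iterates this check once per base bdev (`for name in $base_bdev_names`), which is why the same pattern appears four times in the transcript, once each for pt1 through pt4.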
00:11:34.493 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:34.493 [2024-11-15 10:39:55.637673] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:34.749 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ba26fc2f-5371-4d1b-8f92-e85724bb4445 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ba26fc2f-5371-4d1b-8f92-e85724bb4445 ']' 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.750 [2024-11-15 10:39:55.689325] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:34.750 [2024-11-15 10:39:55.689356] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:34.750 [2024-11-15 10:39:55.689446] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:34.750 [2024-11-15 10:39:55.689551] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:34.750 [2024-11-15 10:39:55.689576] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.750 [2024-11-15 10:39:55.857376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:34.750 [2024-11-15 10:39:55.859887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:34.750 [2024-11-15 10:39:55.860070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:34.750 [2024-11-15 10:39:55.860244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:34.750 [2024-11-15 10:39:55.860427] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:34.750 [2024-11-15 10:39:55.860645] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:34.750 [2024-11-15 10:39:55.860870] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:34.750 [2024-11-15 10:39:55.861074] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:34.750 [2024-11-15 10:39:55.861225] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:34.750 [2024-11-15 10:39:55.861336] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:11:34.750 request: 00:11:34.750 { 00:11:34.750 "name": "raid_bdev1", 00:11:34.750 "raid_level": "concat", 00:11:34.750 "base_bdevs": [ 00:11:34.750 "malloc1", 00:11:34.750 "malloc2", 00:11:34.750 "malloc3", 00:11:34.750 "malloc4" 00:11:34.750 ], 00:11:34.750 "strip_size_kb": 64, 00:11:34.750 "superblock": false, 00:11:34.750 "method": "bdev_raid_create", 00:11:34.750 "req_id": 1 00:11:34.750 } 00:11:34.750 Got JSON-RPC error response 00:11:34.750 response: 00:11:34.750 { 00:11:34.750 "code": -17, 00:11:34.750 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:34.750 } 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.750 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.007 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:35.007 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:35.007 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:11:35.007 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.007 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.007 [2024-11-15 10:39:55.929665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:35.007 [2024-11-15 10:39:55.929727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.007 [2024-11-15 10:39:55.929754] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:35.007 [2024-11-15 10:39:55.929772] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.007 [2024-11-15 10:39:55.932505] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.007 [2024-11-15 10:39:55.932554] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:35.007 [2024-11-15 10:39:55.932640] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:35.007 [2024-11-15 10:39:55.932717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:35.007 pt1 00:11:35.007 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.007 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:35.007 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:35.007 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.007 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:35.007 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.007 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:35.007 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.007 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.007 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.007 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.007 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.007 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:35.007 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.007 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.007 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.007 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.007 "name": "raid_bdev1", 00:11:35.007 "uuid": "ba26fc2f-5371-4d1b-8f92-e85724bb4445", 00:11:35.007 "strip_size_kb": 64, 00:11:35.007 "state": "configuring", 00:11:35.007 "raid_level": "concat", 00:11:35.007 "superblock": true, 00:11:35.007 "num_base_bdevs": 4, 00:11:35.007 "num_base_bdevs_discovered": 1, 00:11:35.007 "num_base_bdevs_operational": 4, 00:11:35.007 "base_bdevs_list": [ 00:11:35.007 { 00:11:35.007 "name": "pt1", 00:11:35.007 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:35.007 "is_configured": true, 00:11:35.007 "data_offset": 2048, 00:11:35.007 "data_size": 63488 00:11:35.007 }, 00:11:35.007 { 00:11:35.007 "name": null, 00:11:35.007 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:35.007 "is_configured": false, 00:11:35.007 "data_offset": 2048, 00:11:35.007 "data_size": 63488 00:11:35.007 }, 00:11:35.007 { 00:11:35.007 "name": null, 00:11:35.007 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:35.007 "is_configured": false, 00:11:35.007 "data_offset": 2048, 00:11:35.007 "data_size": 63488 00:11:35.007 }, 00:11:35.007 { 00:11:35.007 "name": null, 00:11:35.007 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:35.007 "is_configured": false, 00:11:35.007 "data_offset": 2048, 00:11:35.007 "data_size": 63488 00:11:35.007 } 00:11:35.007 ] 00:11:35.007 }' 00:11:35.008 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.008 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.577 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:35.577 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:35.577 10:39:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.577 10:39:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.577 [2024-11-15 10:39:56.485860] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:35.577 [2024-11-15 10:39:56.485950] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.577 [2024-11-15 10:39:56.485980] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:35.577 [2024-11-15 10:39:56.485999] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.577 [2024-11-15 10:39:56.486565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.577 [2024-11-15 10:39:56.486601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:35.577 [2024-11-15 10:39:56.486705] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:35.577 [2024-11-15 10:39:56.486752] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:35.577 pt2 00:11:35.577 10:39:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.577 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:35.577 10:39:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.577 10:39:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.577 [2024-11-15 10:39:56.493855] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:35.577 10:39:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.577 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:35.577 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:35.577 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.577 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:35.577 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.577 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.577 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.577 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.577 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.577 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.577 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.577 10:39:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.577 10:39:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.577 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:35.577 10:39:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.577 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.577 "name": "raid_bdev1", 00:11:35.577 "uuid": "ba26fc2f-5371-4d1b-8f92-e85724bb4445", 00:11:35.577 "strip_size_kb": 64, 00:11:35.577 "state": "configuring", 00:11:35.577 "raid_level": "concat", 00:11:35.577 "superblock": true, 00:11:35.577 "num_base_bdevs": 4, 00:11:35.577 "num_base_bdevs_discovered": 1, 00:11:35.577 "num_base_bdevs_operational": 4, 00:11:35.577 "base_bdevs_list": [ 00:11:35.577 { 00:11:35.577 "name": "pt1", 00:11:35.577 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:35.577 "is_configured": true, 00:11:35.577 "data_offset": 2048, 00:11:35.577 "data_size": 63488 00:11:35.577 }, 00:11:35.577 { 00:11:35.577 "name": null, 00:11:35.577 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:35.577 "is_configured": false, 00:11:35.577 "data_offset": 0, 00:11:35.577 "data_size": 63488 00:11:35.577 }, 00:11:35.577 { 00:11:35.577 "name": null, 00:11:35.577 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:35.577 "is_configured": false, 00:11:35.577 "data_offset": 2048, 00:11:35.577 "data_size": 63488 00:11:35.577 }, 00:11:35.577 { 00:11:35.577 "name": null, 00:11:35.577 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:35.577 "is_configured": false, 00:11:35.577 "data_offset": 2048, 00:11:35.577 "data_size": 63488 00:11:35.577 } 00:11:35.577 ] 00:11:35.577 }' 00:11:35.577 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.577 10:39:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:36.143 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:36.143 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:36.144 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:36.144 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.144 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.144 [2024-11-15 10:39:57.037982] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:36.144 [2024-11-15 10:39:57.038058] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.144 [2024-11-15 10:39:57.038091] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:36.144 [2024-11-15 10:39:57.038107] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.144 [2024-11-15 10:39:57.038670] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.144 [2024-11-15 10:39:57.038695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:36.144 [2024-11-15 10:39:57.038800] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:36.144 [2024-11-15 10:39:57.038830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:36.144 pt2 00:11:36.144 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.144 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:36.144 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:36.144 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:36.144 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.144 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.144 [2024-11-15 10:39:57.049951] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:36.144 [2024-11-15 10:39:57.050009] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.144 [2024-11-15 10:39:57.050044] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:36.144 [2024-11-15 10:39:57.050061] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.144 [2024-11-15 10:39:57.050515] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.144 [2024-11-15 10:39:57.050552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:36.144 [2024-11-15 10:39:57.050635] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:36.144 [2024-11-15 10:39:57.050662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:36.144 pt3 00:11:36.144 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.144 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:36.144 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:36.144 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:36.144 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.144 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.144 [2024-11-15 10:39:57.057926] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:36.144 [2024-11-15 10:39:57.057984] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.144 [2024-11-15 10:39:57.058024] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:36.144 [2024-11-15 10:39:57.058039] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.144 [2024-11-15 10:39:57.058483] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.144 [2024-11-15 10:39:57.058540] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:36.144 [2024-11-15 10:39:57.058632] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:36.144 [2024-11-15 10:39:57.058660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:36.144 [2024-11-15 10:39:57.058821] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:36.144 [2024-11-15 10:39:57.058836] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:36.144 [2024-11-15 10:39:57.059132] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:36.144 [2024-11-15 10:39:57.059329] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:36.144 [2024-11-15 10:39:57.059351] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:36.144 [2024-11-15 10:39:57.059529] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:36.144 pt4 00:11:36.144 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.144 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:36.144 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:36.144 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:36.144 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:36.144 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.144 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:36.144 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.144 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:36.144 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.144 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.144 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.144 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.144 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.144 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.144 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.144 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.144 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.144 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.144 "name": "raid_bdev1", 00:11:36.144 "uuid": "ba26fc2f-5371-4d1b-8f92-e85724bb4445", 00:11:36.144 "strip_size_kb": 64, 00:11:36.144 "state": "online", 00:11:36.144 "raid_level": "concat", 00:11:36.144 
"superblock": true, 00:11:36.144 "num_base_bdevs": 4, 00:11:36.144 "num_base_bdevs_discovered": 4, 00:11:36.144 "num_base_bdevs_operational": 4, 00:11:36.144 "base_bdevs_list": [ 00:11:36.144 { 00:11:36.144 "name": "pt1", 00:11:36.144 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:36.144 "is_configured": true, 00:11:36.144 "data_offset": 2048, 00:11:36.144 "data_size": 63488 00:11:36.144 }, 00:11:36.144 { 00:11:36.144 "name": "pt2", 00:11:36.144 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:36.144 "is_configured": true, 00:11:36.144 "data_offset": 2048, 00:11:36.144 "data_size": 63488 00:11:36.144 }, 00:11:36.144 { 00:11:36.144 "name": "pt3", 00:11:36.144 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:36.144 "is_configured": true, 00:11:36.144 "data_offset": 2048, 00:11:36.144 "data_size": 63488 00:11:36.144 }, 00:11:36.144 { 00:11:36.144 "name": "pt4", 00:11:36.144 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:36.144 "is_configured": true, 00:11:36.144 "data_offset": 2048, 00:11:36.144 "data_size": 63488 00:11:36.144 } 00:11:36.144 ] 00:11:36.144 }' 00:11:36.144 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.144 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.712 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:36.712 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:36.712 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:36.712 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:36.712 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:36.712 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:36.712 10:39:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:36.712 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:36.712 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.712 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.712 [2024-11-15 10:39:57.586560] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:36.712 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.712 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:36.712 "name": "raid_bdev1", 00:11:36.712 "aliases": [ 00:11:36.712 "ba26fc2f-5371-4d1b-8f92-e85724bb4445" 00:11:36.712 ], 00:11:36.712 "product_name": "Raid Volume", 00:11:36.712 "block_size": 512, 00:11:36.712 "num_blocks": 253952, 00:11:36.712 "uuid": "ba26fc2f-5371-4d1b-8f92-e85724bb4445", 00:11:36.712 "assigned_rate_limits": { 00:11:36.712 "rw_ios_per_sec": 0, 00:11:36.712 "rw_mbytes_per_sec": 0, 00:11:36.712 "r_mbytes_per_sec": 0, 00:11:36.712 "w_mbytes_per_sec": 0 00:11:36.712 }, 00:11:36.712 "claimed": false, 00:11:36.712 "zoned": false, 00:11:36.712 "supported_io_types": { 00:11:36.712 "read": true, 00:11:36.712 "write": true, 00:11:36.712 "unmap": true, 00:11:36.712 "flush": true, 00:11:36.712 "reset": true, 00:11:36.712 "nvme_admin": false, 00:11:36.712 "nvme_io": false, 00:11:36.712 "nvme_io_md": false, 00:11:36.712 "write_zeroes": true, 00:11:36.712 "zcopy": false, 00:11:36.712 "get_zone_info": false, 00:11:36.712 "zone_management": false, 00:11:36.712 "zone_append": false, 00:11:36.712 "compare": false, 00:11:36.712 "compare_and_write": false, 00:11:36.712 "abort": false, 00:11:36.712 "seek_hole": false, 00:11:36.712 "seek_data": false, 00:11:36.712 "copy": false, 00:11:36.712 "nvme_iov_md": false 00:11:36.712 }, 00:11:36.712 
"memory_domains": [ 00:11:36.712 { 00:11:36.712 "dma_device_id": "system", 00:11:36.712 "dma_device_type": 1 00:11:36.712 }, 00:11:36.712 { 00:11:36.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.712 "dma_device_type": 2 00:11:36.712 }, 00:11:36.712 { 00:11:36.712 "dma_device_id": "system", 00:11:36.712 "dma_device_type": 1 00:11:36.712 }, 00:11:36.712 { 00:11:36.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.712 "dma_device_type": 2 00:11:36.712 }, 00:11:36.712 { 00:11:36.712 "dma_device_id": "system", 00:11:36.712 "dma_device_type": 1 00:11:36.712 }, 00:11:36.713 { 00:11:36.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.713 "dma_device_type": 2 00:11:36.713 }, 00:11:36.713 { 00:11:36.713 "dma_device_id": "system", 00:11:36.713 "dma_device_type": 1 00:11:36.713 }, 00:11:36.713 { 00:11:36.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.713 "dma_device_type": 2 00:11:36.713 } 00:11:36.713 ], 00:11:36.713 "driver_specific": { 00:11:36.713 "raid": { 00:11:36.713 "uuid": "ba26fc2f-5371-4d1b-8f92-e85724bb4445", 00:11:36.713 "strip_size_kb": 64, 00:11:36.713 "state": "online", 00:11:36.713 "raid_level": "concat", 00:11:36.713 "superblock": true, 00:11:36.713 "num_base_bdevs": 4, 00:11:36.713 "num_base_bdevs_discovered": 4, 00:11:36.713 "num_base_bdevs_operational": 4, 00:11:36.713 "base_bdevs_list": [ 00:11:36.713 { 00:11:36.713 "name": "pt1", 00:11:36.713 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:36.713 "is_configured": true, 00:11:36.713 "data_offset": 2048, 00:11:36.713 "data_size": 63488 00:11:36.713 }, 00:11:36.713 { 00:11:36.713 "name": "pt2", 00:11:36.713 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:36.713 "is_configured": true, 00:11:36.713 "data_offset": 2048, 00:11:36.713 "data_size": 63488 00:11:36.713 }, 00:11:36.713 { 00:11:36.713 "name": "pt3", 00:11:36.713 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:36.713 "is_configured": true, 00:11:36.713 "data_offset": 2048, 00:11:36.713 "data_size": 63488 
00:11:36.713 }, 00:11:36.713 { 00:11:36.713 "name": "pt4", 00:11:36.713 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:36.713 "is_configured": true, 00:11:36.713 "data_offset": 2048, 00:11:36.713 "data_size": 63488 00:11:36.713 } 00:11:36.713 ] 00:11:36.713 } 00:11:36.713 } 00:11:36.713 }' 00:11:36.713 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:36.713 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:36.713 pt2 00:11:36.713 pt3 00:11:36.713 pt4' 00:11:36.713 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.713 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:36.713 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.713 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.713 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:36.713 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.713 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.713 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.713 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.713 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.713 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.713 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:36.713 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.713 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.713 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.713 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.713 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.713 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.713 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.713 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:36.713 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.713 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.713 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.713 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.972 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.972 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.972 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.972 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:36.972 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:11:36.972 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.972 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.972 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.972 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.972 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.972 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:36.972 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.972 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:36.972 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.972 [2024-11-15 10:39:57.966569] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:36.972 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.972 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ba26fc2f-5371-4d1b-8f92-e85724bb4445 '!=' ba26fc2f-5371-4d1b-8f92-e85724bb4445 ']' 00:11:36.972 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:36.972 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:36.972 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:36.972 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72734 00:11:36.972 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72734 ']' 00:11:36.972 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72734 00:11:36.972 10:39:58 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@959 -- # uname 00:11:36.972 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:36.972 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72734 00:11:36.972 killing process with pid 72734 00:11:36.972 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:36.972 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:36.972 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72734' 00:11:36.972 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72734 00:11:36.972 [2024-11-15 10:39:58.040789] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:36.972 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72734 00:11:36.972 [2024-11-15 10:39:58.040885] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:36.972 [2024-11-15 10:39:58.040980] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:36.972 [2024-11-15 10:39:58.040995] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:37.231 [2024-11-15 10:39:58.387357] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:38.607 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:38.607 00:11:38.607 real 0m6.028s 00:11:38.607 user 0m9.160s 00:11:38.607 sys 0m0.844s 00:11:38.607 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.607 ************************************ 00:11:38.607 END TEST raid_superblock_test 00:11:38.607 ************************************ 00:11:38.607 10:39:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.607 10:39:59 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:11:38.607 10:39:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:38.607 10:39:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.607 10:39:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:38.607 ************************************ 00:11:38.607 START TEST raid_read_error_test 00:11:38.607 ************************************ 00:11:38.607 10:39:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:11:38.607 10:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:38.607 10:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:38.607 10:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:38.607 10:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:38.607 10:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:38.607 10:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:38.607 10:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:38.607 10:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:38.608 10:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:38.608 10:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:38.608 10:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:38.608 10:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:38.608 10:39:59 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:38.608 10:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:38.608 10:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:38.608 10:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:38.608 10:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:38.608 10:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:38.608 10:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:38.608 10:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:38.608 10:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:38.608 10:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:38.608 10:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:38.608 10:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:38.608 10:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:38.608 10:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:38.608 10:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:38.608 10:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:38.608 10:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.n0zaByJhrF 00:11:38.608 10:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72999 00:11:38.608 10:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72999 00:11:38.608 10:39:59 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@835 -- # '[' -z 72999 ']' 00:11:38.608 10:39:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.608 10:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:38.608 10:39:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:38.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.608 10:39:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.608 10:39:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:38.608 10:39:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.608 [2024-11-15 10:39:59.561605] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:11:38.608 [2024-11-15 10:39:59.561778] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72999 ] 00:11:38.608 [2024-11-15 10:39:59.744351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.866 [2024-11-15 10:39:59.869866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.125 [2024-11-15 10:40:00.070843] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:39.125 [2024-11-15 10:40:00.070882] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:39.691 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:39.691 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:39.691 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.692 BaseBdev1_malloc 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.692 true 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.692 [2024-11-15 10:40:00.604635] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:39.692 [2024-11-15 10:40:00.604705] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.692 [2024-11-15 10:40:00.604735] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:39.692 [2024-11-15 10:40:00.604767] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.692 [2024-11-15 10:40:00.607561] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.692 [2024-11-15 10:40:00.607620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:39.692 BaseBdev1 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.692 BaseBdev2_malloc 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.692 true 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.692 [2024-11-15 10:40:00.661005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:39.692 [2024-11-15 10:40:00.661072] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.692 [2024-11-15 10:40:00.661097] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:39.692 [2024-11-15 10:40:00.661114] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.692 [2024-11-15 10:40:00.663816] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.692 [2024-11-15 10:40:00.663864] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:39.692 BaseBdev2 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.692 BaseBdev3_malloc 00:11:39.692 10:40:00 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.692 true 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.692 [2024-11-15 10:40:00.726019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:39.692 [2024-11-15 10:40:00.726947] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.692 [2024-11-15 10:40:00.726985] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:39.692 [2024-11-15 10:40:00.727003] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.692 [2024-11-15 10:40:00.729772] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.692 [2024-11-15 10:40:00.729823] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:39.692 BaseBdev3 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.692 BaseBdev4_malloc 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.692 true 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.692 [2024-11-15 10:40:00.781620] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:39.692 [2024-11-15 10:40:00.781814] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.692 [2024-11-15 10:40:00.781884] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:39.692 [2024-11-15 10:40:00.782064] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.692 [2024-11-15 10:40:00.784865] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.692 BaseBdev4 00:11:39.692 [2024-11-15 10:40:00.785030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.692 [2024-11-15 10:40:00.789743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:39.692 [2024-11-15 10:40:00.792151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:39.692 [2024-11-15 10:40:00.792378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:39.692 [2024-11-15 10:40:00.792525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:39.692 [2024-11-15 10:40:00.792844] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:39.692 [2024-11-15 10:40:00.792868] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:39.692 [2024-11-15 10:40:00.793168] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:39.692 [2024-11-15 10:40:00.793375] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:39.692 [2024-11-15 10:40:00.793395] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:39.692 [2024-11-15 10:40:00.793641] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:39.692 10:40:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.692 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.692 "name": "raid_bdev1", 00:11:39.692 "uuid": "ed8151a0-442c-4481-8e03-e028c23cdab2", 00:11:39.692 "strip_size_kb": 64, 00:11:39.692 "state": "online", 00:11:39.693 "raid_level": "concat", 00:11:39.693 "superblock": true, 00:11:39.693 "num_base_bdevs": 4, 00:11:39.693 "num_base_bdevs_discovered": 4, 00:11:39.693 "num_base_bdevs_operational": 4, 00:11:39.693 "base_bdevs_list": [ 
00:11:39.693 { 00:11:39.693 "name": "BaseBdev1", 00:11:39.693 "uuid": "14ab5046-4e4e-5e13-90b4-62ae5619fc15", 00:11:39.693 "is_configured": true, 00:11:39.693 "data_offset": 2048, 00:11:39.693 "data_size": 63488 00:11:39.693 }, 00:11:39.693 { 00:11:39.693 "name": "BaseBdev2", 00:11:39.693 "uuid": "4d39df59-6edb-5b90-8a1a-d4fd7eea7fab", 00:11:39.693 "is_configured": true, 00:11:39.693 "data_offset": 2048, 00:11:39.693 "data_size": 63488 00:11:39.693 }, 00:11:39.693 { 00:11:39.693 "name": "BaseBdev3", 00:11:39.693 "uuid": "2d7fdb26-fa8b-5fad-a2e9-c70d284d2e23", 00:11:39.693 "is_configured": true, 00:11:39.693 "data_offset": 2048, 00:11:39.693 "data_size": 63488 00:11:39.693 }, 00:11:39.693 { 00:11:39.693 "name": "BaseBdev4", 00:11:39.693 "uuid": "c3d00cce-2b1f-5b0b-a13a-7a704a363f43", 00:11:39.693 "is_configured": true, 00:11:39.693 "data_offset": 2048, 00:11:39.693 "data_size": 63488 00:11:39.693 } 00:11:39.693 ] 00:11:39.693 }' 00:11:39.693 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.693 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.259 10:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:40.259 10:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:40.517 [2024-11-15 10:40:01.423270] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:41.509 10:40:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:41.509 10:40:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.509 10:40:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.509 10:40:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.509 10:40:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:41.509 10:40:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:41.509 10:40:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:41.509 10:40:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:41.509 10:40:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:41.509 10:40:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:41.509 10:40:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:41.509 10:40:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.509 10:40:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.509 10:40:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.509 10:40:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.509 10:40:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.509 10:40:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.509 10:40:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.509 10:40:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.509 10:40:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.509 10:40:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.509 10:40:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.509 10:40:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.509 "name": "raid_bdev1", 00:11:41.509 "uuid": "ed8151a0-442c-4481-8e03-e028c23cdab2", 00:11:41.509 "strip_size_kb": 64, 00:11:41.509 "state": "online", 00:11:41.509 "raid_level": "concat", 00:11:41.509 "superblock": true, 00:11:41.509 "num_base_bdevs": 4, 00:11:41.509 "num_base_bdevs_discovered": 4, 00:11:41.509 "num_base_bdevs_operational": 4, 00:11:41.509 "base_bdevs_list": [ 00:11:41.509 { 00:11:41.509 "name": "BaseBdev1", 00:11:41.509 "uuid": "14ab5046-4e4e-5e13-90b4-62ae5619fc15", 00:11:41.509 "is_configured": true, 00:11:41.509 "data_offset": 2048, 00:11:41.509 "data_size": 63488 00:11:41.509 }, 00:11:41.509 { 00:11:41.509 "name": "BaseBdev2", 00:11:41.509 "uuid": "4d39df59-6edb-5b90-8a1a-d4fd7eea7fab", 00:11:41.509 "is_configured": true, 00:11:41.509 "data_offset": 2048, 00:11:41.509 "data_size": 63488 00:11:41.509 }, 00:11:41.509 { 00:11:41.509 "name": "BaseBdev3", 00:11:41.509 "uuid": "2d7fdb26-fa8b-5fad-a2e9-c70d284d2e23", 00:11:41.509 "is_configured": true, 00:11:41.509 "data_offset": 2048, 00:11:41.509 "data_size": 63488 00:11:41.509 }, 00:11:41.509 { 00:11:41.509 "name": "BaseBdev4", 00:11:41.509 "uuid": "c3d00cce-2b1f-5b0b-a13a-7a704a363f43", 00:11:41.509 "is_configured": true, 00:11:41.509 "data_offset": 2048, 00:11:41.509 "data_size": 63488 00:11:41.509 } 00:11:41.509 ] 00:11:41.509 }' 00:11:41.509 10:40:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.509 10:40:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.772 10:40:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:41.772 10:40:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.772 10:40:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.772 [2024-11-15 10:40:02.830690] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:41.772 [2024-11-15 10:40:02.830861] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:41.772 [2024-11-15 10:40:02.834177] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:41.772 { 00:11:41.772 "results": [ 00:11:41.772 { 00:11:41.772 "job": "raid_bdev1", 00:11:41.772 "core_mask": "0x1", 00:11:41.772 "workload": "randrw", 00:11:41.772 "percentage": 50, 00:11:41.772 "status": "finished", 00:11:41.772 "queue_depth": 1, 00:11:41.772 "io_size": 131072, 00:11:41.772 "runtime": 1.405028, 00:11:41.772 "iops": 10977.005440460973, 00:11:41.772 "mibps": 1372.1256800576216, 00:11:41.772 "io_failed": 1, 00:11:41.772 "io_timeout": 0, 00:11:41.772 "avg_latency_us": 127.08794417201058, 00:11:41.772 "min_latency_us": 41.192727272727275, 00:11:41.772 "max_latency_us": 1824.581818181818 00:11:41.772 } 00:11:41.772 ], 00:11:41.772 "core_count": 1 00:11:41.772 } 00:11:41.772 [2024-11-15 10:40:02.834378] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:41.772 [2024-11-15 10:40:02.834453] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:41.772 [2024-11-15 10:40:02.834477] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:41.772 10:40:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.772 10:40:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72999 00:11:41.772 10:40:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 72999 ']' 00:11:41.772 10:40:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 72999 00:11:41.772 10:40:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:41.772 10:40:02 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:41.772 10:40:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72999 00:11:41.772 killing process with pid 72999 00:11:41.772 10:40:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:41.772 10:40:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:41.772 10:40:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72999' 00:11:41.772 10:40:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 72999 00:11:41.772 [2024-11-15 10:40:02.869796] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:41.772 10:40:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 72999 00:11:42.030 [2024-11-15 10:40:03.159435] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:43.407 10:40:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.n0zaByJhrF 00:11:43.407 10:40:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:43.407 10:40:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:43.407 10:40:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:11:43.407 10:40:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:43.407 10:40:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:43.407 10:40:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:43.407 10:40:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:11:43.407 00:11:43.407 real 0m4.807s 00:11:43.407 user 0m5.922s 00:11:43.407 sys 0m0.598s 00:11:43.407 10:40:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:11:43.407 10:40:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.407 ************************************ 00:11:43.407 END TEST raid_read_error_test 00:11:43.407 ************************************ 00:11:43.407 10:40:04 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:11:43.407 10:40:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:43.407 10:40:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.407 10:40:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:43.407 ************************************ 00:11:43.407 START TEST raid_write_error_test 00:11:43.407 ************************************ 00:11:43.407 10:40:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:11:43.407 10:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:43.407 10:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:43.407 10:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:43.407 10:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:43.407 10:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:43.407 10:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:43.407 10:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:43.407 10:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:43.407 10:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:43.407 10:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:43.407 10:40:04 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:43.407 10:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:43.407 10:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:43.407 10:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:43.407 10:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:43.407 10:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:43.407 10:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:43.407 10:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:43.408 10:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:43.408 10:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:43.408 10:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:43.408 10:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:43.408 10:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:43.408 10:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:43.408 10:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:43.408 10:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:43.408 10:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:43.408 10:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:43.408 10:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.UfHVjHahir 00:11:43.408 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.408 10:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73150 00:11:43.408 10:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:43.408 10:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73150 00:11:43.408 10:40:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73150 ']' 00:11:43.408 10:40:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.408 10:40:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:43.408 10:40:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.408 10:40:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:43.408 10:40:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.408 [2024-11-15 10:40:04.422470] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:11:43.408 [2024-11-15 10:40:04.422874] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73150 ] 00:11:43.665 [2024-11-15 10:40:04.605343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.665 [2024-11-15 10:40:04.749417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.922 [2024-11-15 10:40:04.953782] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:43.922 [2024-11-15 10:40:04.953872] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:44.488 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:44.488 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:44.488 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:44.488 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:44.488 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.488 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.488 BaseBdev1_malloc 00:11:44.488 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.488 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:44.488 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.488 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.488 true 00:11:44.488 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:44.488 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:44.488 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.488 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.488 [2024-11-15 10:40:05.485364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:44.488 [2024-11-15 10:40:05.485580] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.488 [2024-11-15 10:40:05.485655] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:44.488 [2024-11-15 10:40:05.485889] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.488 [2024-11-15 10:40:05.488693] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.488 [2024-11-15 10:40:05.488744] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:44.488 BaseBdev1 00:11:44.488 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.488 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:44.488 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:44.488 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.488 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.488 BaseBdev2_malloc 00:11:44.488 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.488 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:44.488 10:40:05 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.488 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.488 true 00:11:44.488 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.488 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:44.488 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.488 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.488 [2024-11-15 10:40:05.549274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:44.488 [2024-11-15 10:40:05.549468] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.488 [2024-11-15 10:40:05.549553] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:44.488 [2024-11-15 10:40:05.549580] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.488 [2024-11-15 10:40:05.552329] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.488 BaseBdev2 00:11:44.488 [2024-11-15 10:40:05.552504] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:44.488 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.488 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:44.488 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:44.488 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.488 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:44.488 BaseBdev3_malloc 00:11:44.488 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.488 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:44.488 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.488 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.488 true 00:11:44.488 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.488 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:44.488 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.488 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.488 [2024-11-15 10:40:05.625156] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:44.488 [2024-11-15 10:40:05.625348] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.488 [2024-11-15 10:40:05.625385] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:44.488 [2024-11-15 10:40:05.625403] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.488 [2024-11-15 10:40:05.628235] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.488 [2024-11-15 10:40:05.628286] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:44.488 BaseBdev3 00:11:44.488 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.488 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:44.488 10:40:05 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:44.488 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.488 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.747 BaseBdev4_malloc 00:11:44.747 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.747 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:44.747 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.747 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.747 true 00:11:44.747 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.747 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:44.748 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.748 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.748 [2024-11-15 10:40:05.684871] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:44.748 [2024-11-15 10:40:05.685064] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.748 [2024-11-15 10:40:05.685133] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:44.748 [2024-11-15 10:40:05.685325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.748 [2024-11-15 10:40:05.688097] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.748 BaseBdev4 00:11:44.748 [2024-11-15 10:40:05.688265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 
00:11:44.748 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.748 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:44.748 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.748 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.748 [2024-11-15 10:40:05.692937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:44.748 [2024-11-15 10:40:05.695422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:44.748 [2024-11-15 10:40:05.695662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:44.748 [2024-11-15 10:40:05.695917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:44.748 [2024-11-15 10:40:05.696326] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:44.748 [2024-11-15 10:40:05.696355] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:44.748 [2024-11-15 10:40:05.696679] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:44.748 [2024-11-15 10:40:05.696909] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:44.748 [2024-11-15 10:40:05.696928] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:44.748 [2024-11-15 10:40:05.697161] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:44.748 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.748 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:11:44.748 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:44.748 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:44.748 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:44.748 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:44.748 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:44.748 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.748 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.748 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.748 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.748 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.748 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.748 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:44.748 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.748 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.748 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.748 "name": "raid_bdev1", 00:11:44.748 "uuid": "f745a5b3-1da7-4a02-ad35-26ce7c99e960", 00:11:44.748 "strip_size_kb": 64, 00:11:44.748 "state": "online", 00:11:44.748 "raid_level": "concat", 00:11:44.748 "superblock": true, 00:11:44.748 "num_base_bdevs": 4, 00:11:44.748 "num_base_bdevs_discovered": 4, 00:11:44.748 
"num_base_bdevs_operational": 4, 00:11:44.748 "base_bdevs_list": [ 00:11:44.748 { 00:11:44.748 "name": "BaseBdev1", 00:11:44.748 "uuid": "407179f2-acc5-5719-9bdf-d12a38860ea0", 00:11:44.748 "is_configured": true, 00:11:44.748 "data_offset": 2048, 00:11:44.748 "data_size": 63488 00:11:44.748 }, 00:11:44.748 { 00:11:44.748 "name": "BaseBdev2", 00:11:44.748 "uuid": "46432864-7c66-5a26-9b01-1753841ba7f1", 00:11:44.748 "is_configured": true, 00:11:44.748 "data_offset": 2048, 00:11:44.748 "data_size": 63488 00:11:44.748 }, 00:11:44.748 { 00:11:44.748 "name": "BaseBdev3", 00:11:44.748 "uuid": "0e667a36-7168-52de-bc09-7922028268b7", 00:11:44.748 "is_configured": true, 00:11:44.748 "data_offset": 2048, 00:11:44.748 "data_size": 63488 00:11:44.748 }, 00:11:44.748 { 00:11:44.748 "name": "BaseBdev4", 00:11:44.748 "uuid": "292c75a7-6501-54cb-81eb-5f49f2e6493a", 00:11:44.748 "is_configured": true, 00:11:44.748 "data_offset": 2048, 00:11:44.748 "data_size": 63488 00:11:44.748 } 00:11:44.748 ] 00:11:44.748 }' 00:11:44.748 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.748 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.315 10:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:45.315 10:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:45.315 [2024-11-15 10:40:06.314668] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:46.251 10:40:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:46.251 10:40:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.251 10:40:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.251 10:40:07 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.251 10:40:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:46.251 10:40:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:46.251 10:40:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:46.251 10:40:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:46.251 10:40:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.251 10:40:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:46.251 10:40:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:46.251 10:40:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:46.251 10:40:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:46.251 10:40:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.251 10:40:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.251 10:40:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.251 10:40:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.251 10:40:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.251 10:40:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.251 10:40:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.251 10:40:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.251 10:40:07 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.251 10:40:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.251 "name": "raid_bdev1", 00:11:46.251 "uuid": "f745a5b3-1da7-4a02-ad35-26ce7c99e960", 00:11:46.251 "strip_size_kb": 64, 00:11:46.251 "state": "online", 00:11:46.251 "raid_level": "concat", 00:11:46.251 "superblock": true, 00:11:46.251 "num_base_bdevs": 4, 00:11:46.251 "num_base_bdevs_discovered": 4, 00:11:46.251 "num_base_bdevs_operational": 4, 00:11:46.251 "base_bdevs_list": [ 00:11:46.251 { 00:11:46.251 "name": "BaseBdev1", 00:11:46.251 "uuid": "407179f2-acc5-5719-9bdf-d12a38860ea0", 00:11:46.251 "is_configured": true, 00:11:46.251 "data_offset": 2048, 00:11:46.251 "data_size": 63488 00:11:46.251 }, 00:11:46.251 { 00:11:46.251 "name": "BaseBdev2", 00:11:46.251 "uuid": "46432864-7c66-5a26-9b01-1753841ba7f1", 00:11:46.251 "is_configured": true, 00:11:46.251 "data_offset": 2048, 00:11:46.251 "data_size": 63488 00:11:46.251 }, 00:11:46.251 { 00:11:46.251 "name": "BaseBdev3", 00:11:46.251 "uuid": "0e667a36-7168-52de-bc09-7922028268b7", 00:11:46.251 "is_configured": true, 00:11:46.251 "data_offset": 2048, 00:11:46.251 "data_size": 63488 00:11:46.251 }, 00:11:46.251 { 00:11:46.251 "name": "BaseBdev4", 00:11:46.251 "uuid": "292c75a7-6501-54cb-81eb-5f49f2e6493a", 00:11:46.251 "is_configured": true, 00:11:46.251 "data_offset": 2048, 00:11:46.251 "data_size": 63488 00:11:46.251 } 00:11:46.251 ] 00:11:46.251 }' 00:11:46.251 10:40:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.251 10:40:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.817 10:40:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:46.817 10:40:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.817 10:40:07 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:46.817 [2024-11-15 10:40:07.704842] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:46.817 [2024-11-15 10:40:07.705023] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:46.817 { 00:11:46.817 "results": [ 00:11:46.817 { 00:11:46.817 "job": "raid_bdev1", 00:11:46.817 "core_mask": "0x1", 00:11:46.817 "workload": "randrw", 00:11:46.817 "percentage": 50, 00:11:46.817 "status": "finished", 00:11:46.817 "queue_depth": 1, 00:11:46.817 "io_size": 131072, 00:11:46.817 "runtime": 1.387659, 00:11:46.817 "iops": 10885.239096925108, 00:11:46.817 "mibps": 1360.6548871156385, 00:11:46.817 "io_failed": 1, 00:11:46.817 "io_timeout": 0, 00:11:46.817 "avg_latency_us": 127.72532912870264, 00:11:46.817 "min_latency_us": 42.589090909090906, 00:11:46.817 "max_latency_us": 1854.370909090909 00:11:46.817 } 00:11:46.817 ], 00:11:46.817 "core_count": 1 00:11:46.817 } 00:11:46.817 [2024-11-15 10:40:07.708365] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:46.817 [2024-11-15 10:40:07.708442] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:46.817 [2024-11-15 10:40:07.708528] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:46.817 [2024-11-15 10:40:07.708553] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:46.818 10:40:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.818 10:40:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73150 00:11:46.818 10:40:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73150 ']' 00:11:46.818 10:40:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73150 00:11:46.818 10:40:07 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:11:46.818 10:40:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:46.818 10:40:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73150 00:11:46.818 10:40:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:46.818 10:40:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:46.818 10:40:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73150' 00:11:46.818 killing process with pid 73150 00:11:46.818 10:40:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73150 00:11:46.818 [2024-11-15 10:40:07.745471] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:46.818 10:40:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73150 00:11:47.075 [2024-11-15 10:40:08.031052] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:48.010 10:40:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.UfHVjHahir 00:11:48.010 10:40:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:48.010 10:40:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:48.010 10:40:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:48.010 10:40:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:48.010 10:40:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:48.010 10:40:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:48.010 10:40:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:11:48.010 00:11:48.010 real 0m4.812s 00:11:48.010 user 0m5.922s 
00:11:48.010 sys 0m0.587s 00:11:48.010 10:40:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:48.010 ************************************ 00:11:48.010 END TEST raid_write_error_test 00:11:48.010 ************************************ 00:11:48.010 10:40:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.010 10:40:09 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:48.010 10:40:09 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:11:48.010 10:40:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:48.010 10:40:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.010 10:40:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:48.010 ************************************ 00:11:48.010 START TEST raid_state_function_test 00:11:48.010 ************************************ 00:11:48.010 10:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:11:48.010 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:48.010 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:48.010 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:48.010 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:48.010 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:48.010 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:48.010 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:48.010 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:48.010 
10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:48.010 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:48.010 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:48.010 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:48.010 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:48.010 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:48.010 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:48.010 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:48.010 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:48.010 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:48.269 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:48.269 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:48.269 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:48.269 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:48.269 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:48.269 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:48.269 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:48.269 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:48.269 10:40:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:48.269 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:48.269 Process raid pid: 73288 00:11:48.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:48.269 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73288 00:11:48.269 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73288' 00:11:48.269 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73288 00:11:48.269 10:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73288 ']' 00:11:48.269 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:48.269 10:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.269 10:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:48.269 10:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:48.269 10:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:48.269 10:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.269 [2024-11-15 10:40:09.274595] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:11:48.269 [2024-11-15 10:40:09.275015] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:48.527 [2024-11-15 10:40:09.460373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.527 [2024-11-15 10:40:09.591401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.785 [2024-11-15 10:40:09.798659] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:48.785 [2024-11-15 10:40:09.798979] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:49.441 10:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:49.441 10:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:49.441 10:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:49.441 10:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.441 10:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.441 [2024-11-15 10:40:10.236829] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:49.441 [2024-11-15 10:40:10.237025] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:49.441 [2024-11-15 10:40:10.237054] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:49.441 [2024-11-15 10:40:10.237072] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:49.441 [2024-11-15 10:40:10.237083] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:49.441 [2024-11-15 10:40:10.237111] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:49.441 [2024-11-15 10:40:10.237121] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:49.441 [2024-11-15 10:40:10.237135] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:49.441 10:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.441 10:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:49.442 10:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:49.442 10:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.442 10:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.442 10:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:49.442 10:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:49.442 10:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.442 10:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.442 10:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.442 10:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.442 10:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.442 10:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.442 10:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:49.442 10:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.442 10:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.442 10:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.442 "name": "Existed_Raid", 00:11:49.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.442 "strip_size_kb": 0, 00:11:49.442 "state": "configuring", 00:11:49.442 "raid_level": "raid1", 00:11:49.442 "superblock": false, 00:11:49.442 "num_base_bdevs": 4, 00:11:49.442 "num_base_bdevs_discovered": 0, 00:11:49.442 "num_base_bdevs_operational": 4, 00:11:49.442 "base_bdevs_list": [ 00:11:49.442 { 00:11:49.442 "name": "BaseBdev1", 00:11:49.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.442 "is_configured": false, 00:11:49.442 "data_offset": 0, 00:11:49.442 "data_size": 0 00:11:49.442 }, 00:11:49.442 { 00:11:49.442 "name": "BaseBdev2", 00:11:49.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.442 "is_configured": false, 00:11:49.442 "data_offset": 0, 00:11:49.442 "data_size": 0 00:11:49.442 }, 00:11:49.442 { 00:11:49.442 "name": "BaseBdev3", 00:11:49.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.442 "is_configured": false, 00:11:49.442 "data_offset": 0, 00:11:49.442 "data_size": 0 00:11:49.442 }, 00:11:49.442 { 00:11:49.442 "name": "BaseBdev4", 00:11:49.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.442 "is_configured": false, 00:11:49.442 "data_offset": 0, 00:11:49.442 "data_size": 0 00:11:49.442 } 00:11:49.442 ] 00:11:49.442 }' 00:11:49.442 10:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.442 10:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.700 10:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:11:49.700 10:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.700 10:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.700 [2024-11-15 10:40:10.724888] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:49.700 [2024-11-15 10:40:10.725062] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:49.700 10:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.700 10:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:49.700 10:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.700 10:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.700 [2024-11-15 10:40:10.732874] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:49.700 [2024-11-15 10:40:10.733553] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:49.700 [2024-11-15 10:40:10.733694] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:49.700 [2024-11-15 10:40:10.733845] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:49.700 [2024-11-15 10:40:10.734021] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:49.700 [2024-11-15 10:40:10.734161] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:49.700 [2024-11-15 10:40:10.734369] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:49.700 [2024-11-15 10:40:10.734489] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:49.700 10:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.700 10:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:49.700 10:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.700 10:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.700 [2024-11-15 10:40:10.777541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:49.700 BaseBdev1 00:11:49.700 10:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.700 10:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:49.700 10:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:49.700 10:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:49.700 10:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:49.700 10:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:49.700 10:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:49.700 10:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:49.700 10:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.700 10:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.700 10:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.700 10:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
-t 2000 00:11:49.700 10:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.700 10:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.700 [ 00:11:49.700 { 00:11:49.700 "name": "BaseBdev1", 00:11:49.700 "aliases": [ 00:11:49.700 "53768107-defe-4bc2-bb37-1d9ea19bce0b" 00:11:49.700 ], 00:11:49.700 "product_name": "Malloc disk", 00:11:49.700 "block_size": 512, 00:11:49.700 "num_blocks": 65536, 00:11:49.700 "uuid": "53768107-defe-4bc2-bb37-1d9ea19bce0b", 00:11:49.700 "assigned_rate_limits": { 00:11:49.700 "rw_ios_per_sec": 0, 00:11:49.700 "rw_mbytes_per_sec": 0, 00:11:49.700 "r_mbytes_per_sec": 0, 00:11:49.700 "w_mbytes_per_sec": 0 00:11:49.700 }, 00:11:49.700 "claimed": true, 00:11:49.700 "claim_type": "exclusive_write", 00:11:49.700 "zoned": false, 00:11:49.700 "supported_io_types": { 00:11:49.700 "read": true, 00:11:49.700 "write": true, 00:11:49.700 "unmap": true, 00:11:49.700 "flush": true, 00:11:49.700 "reset": true, 00:11:49.700 "nvme_admin": false, 00:11:49.700 "nvme_io": false, 00:11:49.700 "nvme_io_md": false, 00:11:49.700 "write_zeroes": true, 00:11:49.700 "zcopy": true, 00:11:49.700 "get_zone_info": false, 00:11:49.700 "zone_management": false, 00:11:49.700 "zone_append": false, 00:11:49.700 "compare": false, 00:11:49.700 "compare_and_write": false, 00:11:49.700 "abort": true, 00:11:49.700 "seek_hole": false, 00:11:49.700 "seek_data": false, 00:11:49.700 "copy": true, 00:11:49.700 "nvme_iov_md": false 00:11:49.700 }, 00:11:49.700 "memory_domains": [ 00:11:49.700 { 00:11:49.700 "dma_device_id": "system", 00:11:49.700 "dma_device_type": 1 00:11:49.700 }, 00:11:49.700 { 00:11:49.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.701 "dma_device_type": 2 00:11:49.701 } 00:11:49.701 ], 00:11:49.701 "driver_specific": {} 00:11:49.701 } 00:11:49.701 ] 00:11:49.701 10:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:49.701 10:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:49.701 10:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:49.701 10:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:49.701 10:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.701 10:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.701 10:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:49.701 10:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:49.701 10:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.701 10:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.701 10:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.701 10:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.701 10:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.701 10:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.701 10:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.701 10:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.701 10:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.959 10:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.959 "name": "Existed_Raid", 00:11:49.959 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:49.959 "strip_size_kb": 0, 00:11:49.959 "state": "configuring", 00:11:49.959 "raid_level": "raid1", 00:11:49.959 "superblock": false, 00:11:49.959 "num_base_bdevs": 4, 00:11:49.959 "num_base_bdevs_discovered": 1, 00:11:49.959 "num_base_bdevs_operational": 4, 00:11:49.959 "base_bdevs_list": [ 00:11:49.959 { 00:11:49.959 "name": "BaseBdev1", 00:11:49.959 "uuid": "53768107-defe-4bc2-bb37-1d9ea19bce0b", 00:11:49.959 "is_configured": true, 00:11:49.959 "data_offset": 0, 00:11:49.959 "data_size": 65536 00:11:49.959 }, 00:11:49.959 { 00:11:49.959 "name": "BaseBdev2", 00:11:49.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.959 "is_configured": false, 00:11:49.959 "data_offset": 0, 00:11:49.959 "data_size": 0 00:11:49.959 }, 00:11:49.959 { 00:11:49.959 "name": "BaseBdev3", 00:11:49.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.959 "is_configured": false, 00:11:49.959 "data_offset": 0, 00:11:49.959 "data_size": 0 00:11:49.959 }, 00:11:49.959 { 00:11:49.959 "name": "BaseBdev4", 00:11:49.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.959 "is_configured": false, 00:11:49.959 "data_offset": 0, 00:11:49.959 "data_size": 0 00:11:49.959 } 00:11:49.959 ] 00:11:49.959 }' 00:11:49.959 10:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.959 10:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.217 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:50.217 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.217 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.217 [2024-11-15 10:40:11.333750] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:50.217 [2024-11-15 10:40:11.333943] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:50.217 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.217 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:50.217 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.217 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.217 [2024-11-15 10:40:11.345832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:50.217 [2024-11-15 10:40:11.348328] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:50.217 [2024-11-15 10:40:11.348503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:50.217 [2024-11-15 10:40:11.348631] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:50.217 [2024-11-15 10:40:11.348694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:50.217 [2024-11-15 10:40:11.348916] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:50.217 [2024-11-15 10:40:11.348976] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:50.217 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.217 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:50.217 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:50.217 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:50.217 10:40:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.217 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.217 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.217 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.217 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.218 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.218 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.218 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.218 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.218 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.218 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.218 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.218 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.218 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.476 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.476 "name": "Existed_Raid", 00:11:50.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.476 "strip_size_kb": 0, 00:11:50.476 "state": "configuring", 00:11:50.476 "raid_level": "raid1", 00:11:50.476 "superblock": false, 00:11:50.476 "num_base_bdevs": 4, 00:11:50.476 "num_base_bdevs_discovered": 1, 00:11:50.476 
"num_base_bdevs_operational": 4, 00:11:50.476 "base_bdevs_list": [ 00:11:50.476 { 00:11:50.476 "name": "BaseBdev1", 00:11:50.476 "uuid": "53768107-defe-4bc2-bb37-1d9ea19bce0b", 00:11:50.476 "is_configured": true, 00:11:50.476 "data_offset": 0, 00:11:50.476 "data_size": 65536 00:11:50.476 }, 00:11:50.476 { 00:11:50.476 "name": "BaseBdev2", 00:11:50.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.476 "is_configured": false, 00:11:50.476 "data_offset": 0, 00:11:50.476 "data_size": 0 00:11:50.476 }, 00:11:50.476 { 00:11:50.476 "name": "BaseBdev3", 00:11:50.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.476 "is_configured": false, 00:11:50.476 "data_offset": 0, 00:11:50.476 "data_size": 0 00:11:50.476 }, 00:11:50.476 { 00:11:50.476 "name": "BaseBdev4", 00:11:50.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.476 "is_configured": false, 00:11:50.476 "data_offset": 0, 00:11:50.476 "data_size": 0 00:11:50.476 } 00:11:50.476 ] 00:11:50.476 }' 00:11:50.476 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.476 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.734 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:50.734 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.734 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.993 [2024-11-15 10:40:11.900203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:50.993 BaseBdev2 00:11:50.993 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.993 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:50.993 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev2 00:11:50.993 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:50.993 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:50.993 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:50.993 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:50.993 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:50.993 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.993 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.993 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.993 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:50.993 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.993 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.993 [ 00:11:50.993 { 00:11:50.993 "name": "BaseBdev2", 00:11:50.993 "aliases": [ 00:11:50.993 "4bc3a9a0-9cf9-4da7-bbab-744cba5614d4" 00:11:50.993 ], 00:11:50.993 "product_name": "Malloc disk", 00:11:50.993 "block_size": 512, 00:11:50.993 "num_blocks": 65536, 00:11:50.993 "uuid": "4bc3a9a0-9cf9-4da7-bbab-744cba5614d4", 00:11:50.993 "assigned_rate_limits": { 00:11:50.993 "rw_ios_per_sec": 0, 00:11:50.993 "rw_mbytes_per_sec": 0, 00:11:50.993 "r_mbytes_per_sec": 0, 00:11:50.993 "w_mbytes_per_sec": 0 00:11:50.993 }, 00:11:50.993 "claimed": true, 00:11:50.993 "claim_type": "exclusive_write", 00:11:50.993 "zoned": false, 00:11:50.993 "supported_io_types": { 00:11:50.993 "read": true, 00:11:50.993 "write": true, 00:11:50.993 
"unmap": true, 00:11:50.993 "flush": true, 00:11:50.993 "reset": true, 00:11:50.993 "nvme_admin": false, 00:11:50.993 "nvme_io": false, 00:11:50.993 "nvme_io_md": false, 00:11:50.993 "write_zeroes": true, 00:11:50.993 "zcopy": true, 00:11:50.993 "get_zone_info": false, 00:11:50.993 "zone_management": false, 00:11:50.993 "zone_append": false, 00:11:50.993 "compare": false, 00:11:50.993 "compare_and_write": false, 00:11:50.993 "abort": true, 00:11:50.993 "seek_hole": false, 00:11:50.993 "seek_data": false, 00:11:50.993 "copy": true, 00:11:50.993 "nvme_iov_md": false 00:11:50.993 }, 00:11:50.993 "memory_domains": [ 00:11:50.993 { 00:11:50.993 "dma_device_id": "system", 00:11:50.993 "dma_device_type": 1 00:11:50.993 }, 00:11:50.993 { 00:11:50.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.993 "dma_device_type": 2 00:11:50.993 } 00:11:50.993 ], 00:11:50.993 "driver_specific": {} 00:11:50.993 } 00:11:50.993 ] 00:11:50.993 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.993 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:50.993 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:50.993 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:50.993 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:50.993 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.993 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.993 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.993 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.993 10:40:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.993 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.993 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.993 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.993 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.993 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.993 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.993 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.993 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.993 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.993 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.993 "name": "Existed_Raid", 00:11:50.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.993 "strip_size_kb": 0, 00:11:50.993 "state": "configuring", 00:11:50.993 "raid_level": "raid1", 00:11:50.993 "superblock": false, 00:11:50.993 "num_base_bdevs": 4, 00:11:50.993 "num_base_bdevs_discovered": 2, 00:11:50.993 "num_base_bdevs_operational": 4, 00:11:50.993 "base_bdevs_list": [ 00:11:50.993 { 00:11:50.993 "name": "BaseBdev1", 00:11:50.993 "uuid": "53768107-defe-4bc2-bb37-1d9ea19bce0b", 00:11:50.993 "is_configured": true, 00:11:50.993 "data_offset": 0, 00:11:50.993 "data_size": 65536 00:11:50.993 }, 00:11:50.993 { 00:11:50.993 "name": "BaseBdev2", 00:11:50.993 "uuid": "4bc3a9a0-9cf9-4da7-bbab-744cba5614d4", 00:11:50.993 "is_configured": true, 00:11:50.993 
"data_offset": 0, 00:11:50.993 "data_size": 65536 00:11:50.993 }, 00:11:50.993 { 00:11:50.993 "name": "BaseBdev3", 00:11:50.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.993 "is_configured": false, 00:11:50.993 "data_offset": 0, 00:11:50.993 "data_size": 0 00:11:50.993 }, 00:11:50.993 { 00:11:50.993 "name": "BaseBdev4", 00:11:50.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.993 "is_configured": false, 00:11:50.993 "data_offset": 0, 00:11:50.993 "data_size": 0 00:11:50.993 } 00:11:50.993 ] 00:11:50.993 }' 00:11:50.993 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.993 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.561 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:51.561 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.561 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.561 [2024-11-15 10:40:12.496988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:51.561 BaseBdev3 00:11:51.561 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.561 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:51.561 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:51.561 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:51.561 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:51.561 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:51.561 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:11:51.561 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:51.561 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.561 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.561 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.561 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:51.561 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.561 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.561 [ 00:11:51.561 { 00:11:51.561 "name": "BaseBdev3", 00:11:51.561 "aliases": [ 00:11:51.561 "ea590c59-b795-4d1e-9436-1d78e5e30dd5" 00:11:51.561 ], 00:11:51.561 "product_name": "Malloc disk", 00:11:51.561 "block_size": 512, 00:11:51.561 "num_blocks": 65536, 00:11:51.561 "uuid": "ea590c59-b795-4d1e-9436-1d78e5e30dd5", 00:11:51.561 "assigned_rate_limits": { 00:11:51.561 "rw_ios_per_sec": 0, 00:11:51.561 "rw_mbytes_per_sec": 0, 00:11:51.561 "r_mbytes_per_sec": 0, 00:11:51.561 "w_mbytes_per_sec": 0 00:11:51.561 }, 00:11:51.561 "claimed": true, 00:11:51.561 "claim_type": "exclusive_write", 00:11:51.561 "zoned": false, 00:11:51.561 "supported_io_types": { 00:11:51.561 "read": true, 00:11:51.561 "write": true, 00:11:51.561 "unmap": true, 00:11:51.561 "flush": true, 00:11:51.561 "reset": true, 00:11:51.561 "nvme_admin": false, 00:11:51.561 "nvme_io": false, 00:11:51.561 "nvme_io_md": false, 00:11:51.561 "write_zeroes": true, 00:11:51.561 "zcopy": true, 00:11:51.561 "get_zone_info": false, 00:11:51.561 "zone_management": false, 00:11:51.561 "zone_append": false, 00:11:51.561 "compare": false, 00:11:51.561 "compare_and_write": false, 00:11:51.561 "abort": true, 
00:11:51.561 "seek_hole": false, 00:11:51.561 "seek_data": false, 00:11:51.561 "copy": true, 00:11:51.561 "nvme_iov_md": false 00:11:51.561 }, 00:11:51.561 "memory_domains": [ 00:11:51.561 { 00:11:51.561 "dma_device_id": "system", 00:11:51.561 "dma_device_type": 1 00:11:51.561 }, 00:11:51.561 { 00:11:51.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.561 "dma_device_type": 2 00:11:51.561 } 00:11:51.561 ], 00:11:51.561 "driver_specific": {} 00:11:51.561 } 00:11:51.561 ] 00:11:51.561 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.561 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:51.561 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:51.561 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:51.561 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:51.561 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.561 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.561 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.561 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.561 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:51.561 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.561 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.561 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.561 10:40:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.561 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.561 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.561 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.561 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.561 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.561 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.561 "name": "Existed_Raid", 00:11:51.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.561 "strip_size_kb": 0, 00:11:51.561 "state": "configuring", 00:11:51.561 "raid_level": "raid1", 00:11:51.561 "superblock": false, 00:11:51.561 "num_base_bdevs": 4, 00:11:51.561 "num_base_bdevs_discovered": 3, 00:11:51.561 "num_base_bdevs_operational": 4, 00:11:51.561 "base_bdevs_list": [ 00:11:51.561 { 00:11:51.561 "name": "BaseBdev1", 00:11:51.561 "uuid": "53768107-defe-4bc2-bb37-1d9ea19bce0b", 00:11:51.561 "is_configured": true, 00:11:51.561 "data_offset": 0, 00:11:51.561 "data_size": 65536 00:11:51.561 }, 00:11:51.561 { 00:11:51.561 "name": "BaseBdev2", 00:11:51.561 "uuid": "4bc3a9a0-9cf9-4da7-bbab-744cba5614d4", 00:11:51.561 "is_configured": true, 00:11:51.561 "data_offset": 0, 00:11:51.561 "data_size": 65536 00:11:51.561 }, 00:11:51.561 { 00:11:51.561 "name": "BaseBdev3", 00:11:51.561 "uuid": "ea590c59-b795-4d1e-9436-1d78e5e30dd5", 00:11:51.561 "is_configured": true, 00:11:51.561 "data_offset": 0, 00:11:51.561 "data_size": 65536 00:11:51.561 }, 00:11:51.561 { 00:11:51.561 "name": "BaseBdev4", 00:11:51.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.561 "is_configured": false, 00:11:51.561 "data_offset": 
0, 00:11:51.561 "data_size": 0 00:11:51.561 } 00:11:51.561 ] 00:11:51.561 }' 00:11:51.561 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.561 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.128 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:52.128 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.128 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.128 [2024-11-15 10:40:13.095396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:52.128 [2024-11-15 10:40:13.095664] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:52.128 [2024-11-15 10:40:13.095689] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:52.128 [2024-11-15 10:40:13.096042] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:52.128 [2024-11-15 10:40:13.096266] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:52.128 [2024-11-15 10:40:13.096288] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:52.128 [2024-11-15 10:40:13.096615] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:52.128 BaseBdev4 00:11:52.128 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.128 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:52.128 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:52.128 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local 
bdev_timeout= 00:11:52.128 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:52.128 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:52.128 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:52.128 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:52.128 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.128 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.129 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.129 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:52.129 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.129 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.129 [ 00:11:52.129 { 00:11:52.129 "name": "BaseBdev4", 00:11:52.129 "aliases": [ 00:11:52.129 "129ca288-8009-4278-96f2-167db725e76a" 00:11:52.129 ], 00:11:52.129 "product_name": "Malloc disk", 00:11:52.129 "block_size": 512, 00:11:52.129 "num_blocks": 65536, 00:11:52.129 "uuid": "129ca288-8009-4278-96f2-167db725e76a", 00:11:52.129 "assigned_rate_limits": { 00:11:52.129 "rw_ios_per_sec": 0, 00:11:52.129 "rw_mbytes_per_sec": 0, 00:11:52.129 "r_mbytes_per_sec": 0, 00:11:52.129 "w_mbytes_per_sec": 0 00:11:52.129 }, 00:11:52.129 "claimed": true, 00:11:52.129 "claim_type": "exclusive_write", 00:11:52.129 "zoned": false, 00:11:52.129 "supported_io_types": { 00:11:52.129 "read": true, 00:11:52.129 "write": true, 00:11:52.129 "unmap": true, 00:11:52.129 "flush": true, 00:11:52.129 "reset": true, 00:11:52.129 "nvme_admin": false, 00:11:52.129 "nvme_io": 
false, 00:11:52.129 "nvme_io_md": false, 00:11:52.129 "write_zeroes": true, 00:11:52.129 "zcopy": true, 00:11:52.129 "get_zone_info": false, 00:11:52.129 "zone_management": false, 00:11:52.129 "zone_append": false, 00:11:52.129 "compare": false, 00:11:52.129 "compare_and_write": false, 00:11:52.129 "abort": true, 00:11:52.129 "seek_hole": false, 00:11:52.129 "seek_data": false, 00:11:52.129 "copy": true, 00:11:52.129 "nvme_iov_md": false 00:11:52.129 }, 00:11:52.129 "memory_domains": [ 00:11:52.129 { 00:11:52.129 "dma_device_id": "system", 00:11:52.129 "dma_device_type": 1 00:11:52.129 }, 00:11:52.129 { 00:11:52.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.129 "dma_device_type": 2 00:11:52.129 } 00:11:52.129 ], 00:11:52.129 "driver_specific": {} 00:11:52.129 } 00:11:52.129 ] 00:11:52.129 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.129 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:52.129 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:52.129 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:52.129 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:52.129 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.129 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:52.129 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:52.129 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:52.129 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:52.129 10:40:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.129 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.129 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.129 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.129 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.129 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.129 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.129 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.129 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.129 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.129 "name": "Existed_Raid", 00:11:52.129 "uuid": "84e4243a-27a7-405d-96c6-647c6d1678a9", 00:11:52.129 "strip_size_kb": 0, 00:11:52.129 "state": "online", 00:11:52.129 "raid_level": "raid1", 00:11:52.129 "superblock": false, 00:11:52.129 "num_base_bdevs": 4, 00:11:52.129 "num_base_bdevs_discovered": 4, 00:11:52.129 "num_base_bdevs_operational": 4, 00:11:52.129 "base_bdevs_list": [ 00:11:52.129 { 00:11:52.129 "name": "BaseBdev1", 00:11:52.129 "uuid": "53768107-defe-4bc2-bb37-1d9ea19bce0b", 00:11:52.129 "is_configured": true, 00:11:52.129 "data_offset": 0, 00:11:52.129 "data_size": 65536 00:11:52.129 }, 00:11:52.129 { 00:11:52.129 "name": "BaseBdev2", 00:11:52.129 "uuid": "4bc3a9a0-9cf9-4da7-bbab-744cba5614d4", 00:11:52.129 "is_configured": true, 00:11:52.129 "data_offset": 0, 00:11:52.129 "data_size": 65536 00:11:52.129 }, 00:11:52.129 { 00:11:52.129 "name": "BaseBdev3", 00:11:52.129 "uuid": "ea590c59-b795-4d1e-9436-1d78e5e30dd5", 
00:11:52.129 "is_configured": true, 00:11:52.129 "data_offset": 0, 00:11:52.129 "data_size": 65536 00:11:52.129 }, 00:11:52.129 { 00:11:52.129 "name": "BaseBdev4", 00:11:52.129 "uuid": "129ca288-8009-4278-96f2-167db725e76a", 00:11:52.129 "is_configured": true, 00:11:52.129 "data_offset": 0, 00:11:52.129 "data_size": 65536 00:11:52.129 } 00:11:52.129 ] 00:11:52.129 }' 00:11:52.129 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.129 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.695 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:52.695 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:52.695 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:52.695 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:52.695 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:52.695 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:52.695 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:52.695 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:52.695 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.695 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.695 [2024-11-15 10:40:13.632024] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:52.695 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.695 10:40:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:52.695 "name": "Existed_Raid", 00:11:52.695 "aliases": [ 00:11:52.695 "84e4243a-27a7-405d-96c6-647c6d1678a9" 00:11:52.695 ], 00:11:52.695 "product_name": "Raid Volume", 00:11:52.695 "block_size": 512, 00:11:52.695 "num_blocks": 65536, 00:11:52.695 "uuid": "84e4243a-27a7-405d-96c6-647c6d1678a9", 00:11:52.695 "assigned_rate_limits": { 00:11:52.695 "rw_ios_per_sec": 0, 00:11:52.695 "rw_mbytes_per_sec": 0, 00:11:52.695 "r_mbytes_per_sec": 0, 00:11:52.695 "w_mbytes_per_sec": 0 00:11:52.695 }, 00:11:52.695 "claimed": false, 00:11:52.695 "zoned": false, 00:11:52.695 "supported_io_types": { 00:11:52.695 "read": true, 00:11:52.695 "write": true, 00:11:52.695 "unmap": false, 00:11:52.695 "flush": false, 00:11:52.695 "reset": true, 00:11:52.695 "nvme_admin": false, 00:11:52.695 "nvme_io": false, 00:11:52.695 "nvme_io_md": false, 00:11:52.695 "write_zeroes": true, 00:11:52.695 "zcopy": false, 00:11:52.695 "get_zone_info": false, 00:11:52.695 "zone_management": false, 00:11:52.695 "zone_append": false, 00:11:52.695 "compare": false, 00:11:52.695 "compare_and_write": false, 00:11:52.695 "abort": false, 00:11:52.695 "seek_hole": false, 00:11:52.695 "seek_data": false, 00:11:52.695 "copy": false, 00:11:52.695 "nvme_iov_md": false 00:11:52.695 }, 00:11:52.695 "memory_domains": [ 00:11:52.695 { 00:11:52.695 "dma_device_id": "system", 00:11:52.695 "dma_device_type": 1 00:11:52.695 }, 00:11:52.695 { 00:11:52.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.695 "dma_device_type": 2 00:11:52.695 }, 00:11:52.695 { 00:11:52.695 "dma_device_id": "system", 00:11:52.695 "dma_device_type": 1 00:11:52.695 }, 00:11:52.695 { 00:11:52.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.695 "dma_device_type": 2 00:11:52.695 }, 00:11:52.695 { 00:11:52.695 "dma_device_id": "system", 00:11:52.695 "dma_device_type": 1 00:11:52.695 }, 00:11:52.695 { 00:11:52.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.695 "dma_device_type": 2 
00:11:52.695 }, 00:11:52.695 { 00:11:52.695 "dma_device_id": "system", 00:11:52.695 "dma_device_type": 1 00:11:52.695 }, 00:11:52.695 { 00:11:52.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.695 "dma_device_type": 2 00:11:52.695 } 00:11:52.695 ], 00:11:52.695 "driver_specific": { 00:11:52.695 "raid": { 00:11:52.695 "uuid": "84e4243a-27a7-405d-96c6-647c6d1678a9", 00:11:52.695 "strip_size_kb": 0, 00:11:52.695 "state": "online", 00:11:52.695 "raid_level": "raid1", 00:11:52.695 "superblock": false, 00:11:52.695 "num_base_bdevs": 4, 00:11:52.695 "num_base_bdevs_discovered": 4, 00:11:52.695 "num_base_bdevs_operational": 4, 00:11:52.695 "base_bdevs_list": [ 00:11:52.695 { 00:11:52.695 "name": "BaseBdev1", 00:11:52.695 "uuid": "53768107-defe-4bc2-bb37-1d9ea19bce0b", 00:11:52.695 "is_configured": true, 00:11:52.695 "data_offset": 0, 00:11:52.695 "data_size": 65536 00:11:52.695 }, 00:11:52.695 { 00:11:52.695 "name": "BaseBdev2", 00:11:52.695 "uuid": "4bc3a9a0-9cf9-4da7-bbab-744cba5614d4", 00:11:52.695 "is_configured": true, 00:11:52.695 "data_offset": 0, 00:11:52.695 "data_size": 65536 00:11:52.695 }, 00:11:52.695 { 00:11:52.695 "name": "BaseBdev3", 00:11:52.695 "uuid": "ea590c59-b795-4d1e-9436-1d78e5e30dd5", 00:11:52.695 "is_configured": true, 00:11:52.695 "data_offset": 0, 00:11:52.695 "data_size": 65536 00:11:52.695 }, 00:11:52.695 { 00:11:52.695 "name": "BaseBdev4", 00:11:52.695 "uuid": "129ca288-8009-4278-96f2-167db725e76a", 00:11:52.695 "is_configured": true, 00:11:52.695 "data_offset": 0, 00:11:52.695 "data_size": 65536 00:11:52.695 } 00:11:52.695 ] 00:11:52.695 } 00:11:52.695 } 00:11:52.695 }' 00:11:52.695 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:52.695 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:52.695 BaseBdev2 00:11:52.695 BaseBdev3 00:11:52.695 BaseBdev4' 00:11:52.695 
10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:52.695 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:52.695 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:52.695 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:52.695 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:52.695 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.695 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.695 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.695 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:52.695 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:52.695 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:52.695 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:52.695 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:52.695 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.695 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.954 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.954 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:11:52.954 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:52.954 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:52.954 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:52.954 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.954 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:52.954 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.954 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.954 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:52.954 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:52.954 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:52.954 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:52.954 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.954 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.954 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:52.954 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.954 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:52.954 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:11:52.954 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:52.954 10:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.954 10:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.954 [2024-11-15 10:40:14.031796] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:53.277 10:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.277 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:53.277 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:53.277 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:53.277 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:53.277 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:53.277 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:53.277 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.277 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.277 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.277 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.277 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:53.277 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.277 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:11:53.277 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.277 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.277 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.277 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.277 10:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.277 10:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.277 10:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.277 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.277 "name": "Existed_Raid", 00:11:53.277 "uuid": "84e4243a-27a7-405d-96c6-647c6d1678a9", 00:11:53.277 "strip_size_kb": 0, 00:11:53.277 "state": "online", 00:11:53.277 "raid_level": "raid1", 00:11:53.277 "superblock": false, 00:11:53.277 "num_base_bdevs": 4, 00:11:53.277 "num_base_bdevs_discovered": 3, 00:11:53.277 "num_base_bdevs_operational": 3, 00:11:53.277 "base_bdevs_list": [ 00:11:53.277 { 00:11:53.277 "name": null, 00:11:53.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.277 "is_configured": false, 00:11:53.277 "data_offset": 0, 00:11:53.277 "data_size": 65536 00:11:53.277 }, 00:11:53.277 { 00:11:53.277 "name": "BaseBdev2", 00:11:53.277 "uuid": "4bc3a9a0-9cf9-4da7-bbab-744cba5614d4", 00:11:53.277 "is_configured": true, 00:11:53.277 "data_offset": 0, 00:11:53.277 "data_size": 65536 00:11:53.277 }, 00:11:53.277 { 00:11:53.277 "name": "BaseBdev3", 00:11:53.277 "uuid": "ea590c59-b795-4d1e-9436-1d78e5e30dd5", 00:11:53.277 "is_configured": true, 00:11:53.277 "data_offset": 0, 00:11:53.277 "data_size": 65536 00:11:53.277 }, 00:11:53.277 { 
00:11:53.277 "name": "BaseBdev4", 00:11:53.278 "uuid": "129ca288-8009-4278-96f2-167db725e76a", 00:11:53.278 "is_configured": true, 00:11:53.278 "data_offset": 0, 00:11:53.278 "data_size": 65536 00:11:53.278 } 00:11:53.278 ] 00:11:53.278 }' 00:11:53.278 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.278 10:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.536 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:53.536 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:53.536 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.536 10:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.536 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:53.536 10:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.536 10:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.536 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:53.536 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:53.536 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:53.536 10:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.536 10:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.536 [2024-11-15 10:40:14.684732] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:53.793 10:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.793 
10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:53.793 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:53.793 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.793 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:53.793 10:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.793 10:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.793 10:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.793 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:53.793 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:53.793 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:53.793 10:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.793 10:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.793 [2024-11-15 10:40:14.830243] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:53.793 10:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.793 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:53.793 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:53.793 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.793 10:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.793 10:40:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.793 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:53.793 10:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.051 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:54.051 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:54.051 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:54.051 10:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.051 10:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.051 [2024-11-15 10:40:14.974562] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:54.051 [2024-11-15 10:40:14.974812] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:54.051 [2024-11-15 10:40:15.057880] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:54.051 [2024-11-15 10:40:15.058136] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:54.051 [2024-11-15 10:40:15.058171] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:54.051 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.051 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:54.051 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:54.051 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.051 10:40:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.051 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.051 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:54.051 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.051 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:54.051 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:54.051 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:54.051 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:54.051 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:54.051 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:54.051 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.051 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.051 BaseBdev2 00:11:54.051 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.051 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:54.051 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:54.051 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:54.051 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:54.051 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:54.051 10:40:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:54.051 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:54.051 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.051 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.051 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.051 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:54.051 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.051 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.051 [ 00:11:54.051 { 00:11:54.051 "name": "BaseBdev2", 00:11:54.051 "aliases": [ 00:11:54.051 "0bbe70b9-fdf1-483c-bd0f-6d8ea86dbc88" 00:11:54.051 ], 00:11:54.051 "product_name": "Malloc disk", 00:11:54.051 "block_size": 512, 00:11:54.051 "num_blocks": 65536, 00:11:54.051 "uuid": "0bbe70b9-fdf1-483c-bd0f-6d8ea86dbc88", 00:11:54.051 "assigned_rate_limits": { 00:11:54.051 "rw_ios_per_sec": 0, 00:11:54.051 "rw_mbytes_per_sec": 0, 00:11:54.051 "r_mbytes_per_sec": 0, 00:11:54.051 "w_mbytes_per_sec": 0 00:11:54.051 }, 00:11:54.051 "claimed": false, 00:11:54.051 "zoned": false, 00:11:54.051 "supported_io_types": { 00:11:54.051 "read": true, 00:11:54.051 "write": true, 00:11:54.051 "unmap": true, 00:11:54.051 "flush": true, 00:11:54.051 "reset": true, 00:11:54.051 "nvme_admin": false, 00:11:54.051 "nvme_io": false, 00:11:54.051 "nvme_io_md": false, 00:11:54.051 "write_zeroes": true, 00:11:54.051 "zcopy": true, 00:11:54.051 "get_zone_info": false, 00:11:54.051 "zone_management": false, 00:11:54.051 "zone_append": false, 00:11:54.051 "compare": false, 00:11:54.051 "compare_and_write": false, 
00:11:54.051 "abort": true, 00:11:54.051 "seek_hole": false, 00:11:54.051 "seek_data": false, 00:11:54.051 "copy": true, 00:11:54.051 "nvme_iov_md": false 00:11:54.051 }, 00:11:54.051 "memory_domains": [ 00:11:54.051 { 00:11:54.051 "dma_device_id": "system", 00:11:54.051 "dma_device_type": 1 00:11:54.051 }, 00:11:54.051 { 00:11:54.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.051 "dma_device_type": 2 00:11:54.051 } 00:11:54.051 ], 00:11:54.051 "driver_specific": {} 00:11:54.051 } 00:11:54.051 ] 00:11:54.051 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.051 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:54.051 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:54.051 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:54.051 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:54.051 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.051 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.310 BaseBdev3 00:11:54.310 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.310 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:54.310 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:54.310 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:54.310 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:54.310 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:54.310 10:40:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:54.310 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:54.310 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.310 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.310 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.310 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:54.310 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.310 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.310 [ 00:11:54.310 { 00:11:54.310 "name": "BaseBdev3", 00:11:54.310 "aliases": [ 00:11:54.310 "591c6419-c61f-462b-91a3-532a6ac681a4" 00:11:54.310 ], 00:11:54.310 "product_name": "Malloc disk", 00:11:54.310 "block_size": 512, 00:11:54.310 "num_blocks": 65536, 00:11:54.310 "uuid": "591c6419-c61f-462b-91a3-532a6ac681a4", 00:11:54.310 "assigned_rate_limits": { 00:11:54.310 "rw_ios_per_sec": 0, 00:11:54.310 "rw_mbytes_per_sec": 0, 00:11:54.310 "r_mbytes_per_sec": 0, 00:11:54.310 "w_mbytes_per_sec": 0 00:11:54.310 }, 00:11:54.310 "claimed": false, 00:11:54.310 "zoned": false, 00:11:54.310 "supported_io_types": { 00:11:54.310 "read": true, 00:11:54.310 "write": true, 00:11:54.310 "unmap": true, 00:11:54.310 "flush": true, 00:11:54.310 "reset": true, 00:11:54.310 "nvme_admin": false, 00:11:54.310 "nvme_io": false, 00:11:54.310 "nvme_io_md": false, 00:11:54.310 "write_zeroes": true, 00:11:54.310 "zcopy": true, 00:11:54.310 "get_zone_info": false, 00:11:54.310 "zone_management": false, 00:11:54.310 "zone_append": false, 00:11:54.310 "compare": false, 00:11:54.311 "compare_and_write": false, 
00:11:54.311 "abort": true, 00:11:54.311 "seek_hole": false, 00:11:54.311 "seek_data": false, 00:11:54.311 "copy": true, 00:11:54.311 "nvme_iov_md": false 00:11:54.311 }, 00:11:54.311 "memory_domains": [ 00:11:54.311 { 00:11:54.311 "dma_device_id": "system", 00:11:54.311 "dma_device_type": 1 00:11:54.311 }, 00:11:54.311 { 00:11:54.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.311 "dma_device_type": 2 00:11:54.311 } 00:11:54.311 ], 00:11:54.311 "driver_specific": {} 00:11:54.311 } 00:11:54.311 ] 00:11:54.311 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.311 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:54.311 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:54.311 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:54.311 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:54.311 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.311 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.311 BaseBdev4 00:11:54.311 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.311 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:54.311 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:54.311 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:54.311 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:54.311 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:54.311 10:40:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:54.311 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:54.311 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.311 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.311 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.311 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:54.311 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.311 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.311 [ 00:11:54.311 { 00:11:54.311 "name": "BaseBdev4", 00:11:54.311 "aliases": [ 00:11:54.311 "616e0cea-64a1-4b35-a739-f8d3a1f8de8a" 00:11:54.311 ], 00:11:54.311 "product_name": "Malloc disk", 00:11:54.311 "block_size": 512, 00:11:54.311 "num_blocks": 65536, 00:11:54.311 "uuid": "616e0cea-64a1-4b35-a739-f8d3a1f8de8a", 00:11:54.311 "assigned_rate_limits": { 00:11:54.311 "rw_ios_per_sec": 0, 00:11:54.311 "rw_mbytes_per_sec": 0, 00:11:54.311 "r_mbytes_per_sec": 0, 00:11:54.311 "w_mbytes_per_sec": 0 00:11:54.311 }, 00:11:54.311 "claimed": false, 00:11:54.311 "zoned": false, 00:11:54.311 "supported_io_types": { 00:11:54.311 "read": true, 00:11:54.311 "write": true, 00:11:54.311 "unmap": true, 00:11:54.311 "flush": true, 00:11:54.311 "reset": true, 00:11:54.311 "nvme_admin": false, 00:11:54.311 "nvme_io": false, 00:11:54.311 "nvme_io_md": false, 00:11:54.311 "write_zeroes": true, 00:11:54.311 "zcopy": true, 00:11:54.311 "get_zone_info": false, 00:11:54.311 "zone_management": false, 00:11:54.311 "zone_append": false, 00:11:54.311 "compare": false, 00:11:54.311 "compare_and_write": false, 
00:11:54.311 "abort": true, 00:11:54.311 "seek_hole": false, 00:11:54.311 "seek_data": false, 00:11:54.311 "copy": true, 00:11:54.311 "nvme_iov_md": false 00:11:54.311 }, 00:11:54.311 "memory_domains": [ 00:11:54.311 { 00:11:54.311 "dma_device_id": "system", 00:11:54.311 "dma_device_type": 1 00:11:54.311 }, 00:11:54.311 { 00:11:54.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.311 "dma_device_type": 2 00:11:54.311 } 00:11:54.311 ], 00:11:54.311 "driver_specific": {} 00:11:54.311 } 00:11:54.311 ] 00:11:54.311 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.311 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:54.311 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:54.311 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:54.311 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:54.311 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.311 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.311 [2024-11-15 10:40:15.340735] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:54.311 [2024-11-15 10:40:15.340920] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:54.311 [2024-11-15 10:40:15.341054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:54.311 [2024-11-15 10:40:15.343422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:54.311 [2024-11-15 10:40:15.343617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:54.311 10:40:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.311 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:54.311 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:54.311 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:54.311 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:54.311 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:54.311 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:54.311 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.311 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.311 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.311 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.311 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.311 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.311 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.311 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.311 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.311 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.311 "name": "Existed_Raid", 00:11:54.311 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:54.311 "strip_size_kb": 0, 00:11:54.311 "state": "configuring", 00:11:54.311 "raid_level": "raid1", 00:11:54.311 "superblock": false, 00:11:54.311 "num_base_bdevs": 4, 00:11:54.311 "num_base_bdevs_discovered": 3, 00:11:54.311 "num_base_bdevs_operational": 4, 00:11:54.311 "base_bdevs_list": [ 00:11:54.311 { 00:11:54.311 "name": "BaseBdev1", 00:11:54.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.311 "is_configured": false, 00:11:54.311 "data_offset": 0, 00:11:54.311 "data_size": 0 00:11:54.311 }, 00:11:54.311 { 00:11:54.311 "name": "BaseBdev2", 00:11:54.311 "uuid": "0bbe70b9-fdf1-483c-bd0f-6d8ea86dbc88", 00:11:54.311 "is_configured": true, 00:11:54.311 "data_offset": 0, 00:11:54.311 "data_size": 65536 00:11:54.311 }, 00:11:54.311 { 00:11:54.311 "name": "BaseBdev3", 00:11:54.311 "uuid": "591c6419-c61f-462b-91a3-532a6ac681a4", 00:11:54.311 "is_configured": true, 00:11:54.311 "data_offset": 0, 00:11:54.311 "data_size": 65536 00:11:54.311 }, 00:11:54.311 { 00:11:54.311 "name": "BaseBdev4", 00:11:54.311 "uuid": "616e0cea-64a1-4b35-a739-f8d3a1f8de8a", 00:11:54.311 "is_configured": true, 00:11:54.311 "data_offset": 0, 00:11:54.311 "data_size": 65536 00:11:54.311 } 00:11:54.311 ] 00:11:54.311 }' 00:11:54.311 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.311 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.879 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:54.879 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.879 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.879 [2024-11-15 10:40:15.860904] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:54.879 10:40:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.879 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:54.879 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:54.879 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:54.879 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:54.879 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:54.879 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:54.879 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.879 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.879 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.879 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.879 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.879 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.879 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.879 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.879 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.879 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.879 "name": "Existed_Raid", 00:11:54.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.879 
"strip_size_kb": 0, 00:11:54.879 "state": "configuring", 00:11:54.879 "raid_level": "raid1", 00:11:54.879 "superblock": false, 00:11:54.879 "num_base_bdevs": 4, 00:11:54.879 "num_base_bdevs_discovered": 2, 00:11:54.879 "num_base_bdevs_operational": 4, 00:11:54.879 "base_bdevs_list": [ 00:11:54.879 { 00:11:54.879 "name": "BaseBdev1", 00:11:54.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.879 "is_configured": false, 00:11:54.879 "data_offset": 0, 00:11:54.879 "data_size": 0 00:11:54.879 }, 00:11:54.879 { 00:11:54.879 "name": null, 00:11:54.879 "uuid": "0bbe70b9-fdf1-483c-bd0f-6d8ea86dbc88", 00:11:54.879 "is_configured": false, 00:11:54.879 "data_offset": 0, 00:11:54.879 "data_size": 65536 00:11:54.879 }, 00:11:54.879 { 00:11:54.879 "name": "BaseBdev3", 00:11:54.879 "uuid": "591c6419-c61f-462b-91a3-532a6ac681a4", 00:11:54.879 "is_configured": true, 00:11:54.879 "data_offset": 0, 00:11:54.879 "data_size": 65536 00:11:54.879 }, 00:11:54.879 { 00:11:54.879 "name": "BaseBdev4", 00:11:54.879 "uuid": "616e0cea-64a1-4b35-a739-f8d3a1f8de8a", 00:11:54.879 "is_configured": true, 00:11:54.879 "data_offset": 0, 00:11:54.879 "data_size": 65536 00:11:54.879 } 00:11:54.879 ] 00:11:54.879 }' 00:11:54.879 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.879 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.446 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.446 10:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.446 10:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.446 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:55.446 10:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.446 10:40:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:55.446 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:55.446 10:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.446 10:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.446 [2024-11-15 10:40:16.462712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:55.446 BaseBdev1 00:11:55.446 10:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.446 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:55.446 10:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:55.446 10:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:55.446 10:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:55.446 10:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:55.446 10:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:55.446 10:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:55.446 10:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.446 10:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.446 10:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.446 10:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:55.446 10:40:16 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.446 10:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.446 [ 00:11:55.446 { 00:11:55.446 "name": "BaseBdev1", 00:11:55.446 "aliases": [ 00:11:55.446 "cb1abe3e-e27c-4338-b952-aba825128460" 00:11:55.446 ], 00:11:55.446 "product_name": "Malloc disk", 00:11:55.446 "block_size": 512, 00:11:55.446 "num_blocks": 65536, 00:11:55.446 "uuid": "cb1abe3e-e27c-4338-b952-aba825128460", 00:11:55.446 "assigned_rate_limits": { 00:11:55.446 "rw_ios_per_sec": 0, 00:11:55.446 "rw_mbytes_per_sec": 0, 00:11:55.446 "r_mbytes_per_sec": 0, 00:11:55.446 "w_mbytes_per_sec": 0 00:11:55.446 }, 00:11:55.446 "claimed": true, 00:11:55.446 "claim_type": "exclusive_write", 00:11:55.446 "zoned": false, 00:11:55.446 "supported_io_types": { 00:11:55.446 "read": true, 00:11:55.446 "write": true, 00:11:55.446 "unmap": true, 00:11:55.446 "flush": true, 00:11:55.446 "reset": true, 00:11:55.446 "nvme_admin": false, 00:11:55.446 "nvme_io": false, 00:11:55.446 "nvme_io_md": false, 00:11:55.446 "write_zeroes": true, 00:11:55.446 "zcopy": true, 00:11:55.446 "get_zone_info": false, 00:11:55.446 "zone_management": false, 00:11:55.446 "zone_append": false, 00:11:55.446 "compare": false, 00:11:55.446 "compare_and_write": false, 00:11:55.446 "abort": true, 00:11:55.446 "seek_hole": false, 00:11:55.446 "seek_data": false, 00:11:55.446 "copy": true, 00:11:55.446 "nvme_iov_md": false 00:11:55.446 }, 00:11:55.446 "memory_domains": [ 00:11:55.446 { 00:11:55.446 "dma_device_id": "system", 00:11:55.446 "dma_device_type": 1 00:11:55.446 }, 00:11:55.446 { 00:11:55.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.446 "dma_device_type": 2 00:11:55.446 } 00:11:55.446 ], 00:11:55.446 "driver_specific": {} 00:11:55.446 } 00:11:55.446 ] 00:11:55.446 10:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.446 10:40:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@911 -- # return 0 00:11:55.446 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:55.446 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.446 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.446 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:55.446 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:55.446 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.446 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.446 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.446 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.446 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.446 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.447 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.447 10:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.447 10:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.447 10:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.447 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.447 "name": "Existed_Raid", 00:11:55.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.447 
"strip_size_kb": 0, 00:11:55.447 "state": "configuring", 00:11:55.447 "raid_level": "raid1", 00:11:55.447 "superblock": false, 00:11:55.447 "num_base_bdevs": 4, 00:11:55.447 "num_base_bdevs_discovered": 3, 00:11:55.447 "num_base_bdevs_operational": 4, 00:11:55.447 "base_bdevs_list": [ 00:11:55.447 { 00:11:55.447 "name": "BaseBdev1", 00:11:55.447 "uuid": "cb1abe3e-e27c-4338-b952-aba825128460", 00:11:55.447 "is_configured": true, 00:11:55.447 "data_offset": 0, 00:11:55.447 "data_size": 65536 00:11:55.447 }, 00:11:55.447 { 00:11:55.447 "name": null, 00:11:55.447 "uuid": "0bbe70b9-fdf1-483c-bd0f-6d8ea86dbc88", 00:11:55.447 "is_configured": false, 00:11:55.447 "data_offset": 0, 00:11:55.447 "data_size": 65536 00:11:55.447 }, 00:11:55.447 { 00:11:55.447 "name": "BaseBdev3", 00:11:55.447 "uuid": "591c6419-c61f-462b-91a3-532a6ac681a4", 00:11:55.447 "is_configured": true, 00:11:55.447 "data_offset": 0, 00:11:55.447 "data_size": 65536 00:11:55.447 }, 00:11:55.447 { 00:11:55.447 "name": "BaseBdev4", 00:11:55.447 "uuid": "616e0cea-64a1-4b35-a739-f8d3a1f8de8a", 00:11:55.447 "is_configured": true, 00:11:55.447 "data_offset": 0, 00:11:55.447 "data_size": 65536 00:11:55.447 } 00:11:55.447 ] 00:11:55.447 }' 00:11:55.447 10:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.447 10:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.014 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.014 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:56.014 10:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.014 10:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.014 10:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.014 
10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:56.015 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:56.015 10:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.015 10:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.015 [2024-11-15 10:40:17.062954] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:56.015 10:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.015 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:56.015 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.015 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.015 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:56.015 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:56.015 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.015 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.015 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.015 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.015 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.015 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.015 10:40:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.015 10:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.015 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.015 10:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.015 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.015 "name": "Existed_Raid", 00:11:56.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.015 "strip_size_kb": 0, 00:11:56.015 "state": "configuring", 00:11:56.015 "raid_level": "raid1", 00:11:56.015 "superblock": false, 00:11:56.015 "num_base_bdevs": 4, 00:11:56.015 "num_base_bdevs_discovered": 2, 00:11:56.015 "num_base_bdevs_operational": 4, 00:11:56.015 "base_bdevs_list": [ 00:11:56.015 { 00:11:56.015 "name": "BaseBdev1", 00:11:56.015 "uuid": "cb1abe3e-e27c-4338-b952-aba825128460", 00:11:56.015 "is_configured": true, 00:11:56.015 "data_offset": 0, 00:11:56.015 "data_size": 65536 00:11:56.015 }, 00:11:56.015 { 00:11:56.015 "name": null, 00:11:56.015 "uuid": "0bbe70b9-fdf1-483c-bd0f-6d8ea86dbc88", 00:11:56.015 "is_configured": false, 00:11:56.015 "data_offset": 0, 00:11:56.015 "data_size": 65536 00:11:56.015 }, 00:11:56.015 { 00:11:56.015 "name": null, 00:11:56.015 "uuid": "591c6419-c61f-462b-91a3-532a6ac681a4", 00:11:56.015 "is_configured": false, 00:11:56.015 "data_offset": 0, 00:11:56.015 "data_size": 65536 00:11:56.015 }, 00:11:56.015 { 00:11:56.015 "name": "BaseBdev4", 00:11:56.015 "uuid": "616e0cea-64a1-4b35-a739-f8d3a1f8de8a", 00:11:56.015 "is_configured": true, 00:11:56.015 "data_offset": 0, 00:11:56.015 "data_size": 65536 00:11:56.015 } 00:11:56.015 ] 00:11:56.015 }' 00:11:56.015 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.015 10:40:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:56.582 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.582 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:56.582 10:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.582 10:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.582 10:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.582 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:56.582 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:56.582 10:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.582 10:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.582 [2024-11-15 10:40:17.627088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:56.582 10:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.582 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:56.582 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.582 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.582 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:56.582 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:56.582 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:56.582 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.582 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.582 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.582 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.582 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.582 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.582 10:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.582 10:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.582 10:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.582 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.582 "name": "Existed_Raid", 00:11:56.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.582 "strip_size_kb": 0, 00:11:56.582 "state": "configuring", 00:11:56.582 "raid_level": "raid1", 00:11:56.582 "superblock": false, 00:11:56.582 "num_base_bdevs": 4, 00:11:56.582 "num_base_bdevs_discovered": 3, 00:11:56.582 "num_base_bdevs_operational": 4, 00:11:56.582 "base_bdevs_list": [ 00:11:56.582 { 00:11:56.582 "name": "BaseBdev1", 00:11:56.582 "uuid": "cb1abe3e-e27c-4338-b952-aba825128460", 00:11:56.582 "is_configured": true, 00:11:56.582 "data_offset": 0, 00:11:56.582 "data_size": 65536 00:11:56.582 }, 00:11:56.582 { 00:11:56.582 "name": null, 00:11:56.582 "uuid": "0bbe70b9-fdf1-483c-bd0f-6d8ea86dbc88", 00:11:56.582 "is_configured": false, 00:11:56.582 "data_offset": 0, 00:11:56.582 "data_size": 65536 00:11:56.582 }, 00:11:56.582 { 
00:11:56.582 "name": "BaseBdev3", 00:11:56.582 "uuid": "591c6419-c61f-462b-91a3-532a6ac681a4", 00:11:56.582 "is_configured": true, 00:11:56.582 "data_offset": 0, 00:11:56.582 "data_size": 65536 00:11:56.582 }, 00:11:56.582 { 00:11:56.582 "name": "BaseBdev4", 00:11:56.582 "uuid": "616e0cea-64a1-4b35-a739-f8d3a1f8de8a", 00:11:56.582 "is_configured": true, 00:11:56.582 "data_offset": 0, 00:11:56.582 "data_size": 65536 00:11:56.582 } 00:11:56.582 ] 00:11:56.582 }' 00:11:56.582 10:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.582 10:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.149 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:57.149 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.149 10:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.149 10:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.149 10:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.149 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:57.149 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:57.149 10:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.149 10:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.149 [2024-11-15 10:40:18.199288] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:57.149 10:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.149 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:57.149 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.149 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.149 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.149 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:57.149 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:57.149 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.149 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.149 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.149 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.149 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.149 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.149 10:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.149 10:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.411 10:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.411 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.411 "name": "Existed_Raid", 00:11:57.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.411 "strip_size_kb": 0, 00:11:57.411 "state": "configuring", 00:11:57.411 "raid_level": "raid1", 00:11:57.411 "superblock": false, 00:11:57.411 
"num_base_bdevs": 4, 00:11:57.411 "num_base_bdevs_discovered": 2, 00:11:57.411 "num_base_bdevs_operational": 4, 00:11:57.411 "base_bdevs_list": [ 00:11:57.411 { 00:11:57.411 "name": null, 00:11:57.411 "uuid": "cb1abe3e-e27c-4338-b952-aba825128460", 00:11:57.411 "is_configured": false, 00:11:57.411 "data_offset": 0, 00:11:57.411 "data_size": 65536 00:11:57.411 }, 00:11:57.411 { 00:11:57.411 "name": null, 00:11:57.411 "uuid": "0bbe70b9-fdf1-483c-bd0f-6d8ea86dbc88", 00:11:57.411 "is_configured": false, 00:11:57.411 "data_offset": 0, 00:11:57.411 "data_size": 65536 00:11:57.411 }, 00:11:57.411 { 00:11:57.411 "name": "BaseBdev3", 00:11:57.411 "uuid": "591c6419-c61f-462b-91a3-532a6ac681a4", 00:11:57.411 "is_configured": true, 00:11:57.411 "data_offset": 0, 00:11:57.411 "data_size": 65536 00:11:57.411 }, 00:11:57.411 { 00:11:57.411 "name": "BaseBdev4", 00:11:57.411 "uuid": "616e0cea-64a1-4b35-a739-f8d3a1f8de8a", 00:11:57.411 "is_configured": true, 00:11:57.411 "data_offset": 0, 00:11:57.411 "data_size": 65536 00:11:57.411 } 00:11:57.411 ] 00:11:57.411 }' 00:11:57.411 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.411 10:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.675 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.675 10:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.675 10:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.675 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:57.675 10:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.934 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:57.934 10:40:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:57.934 10:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.934 10:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.934 [2024-11-15 10:40:18.860087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:57.934 10:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.934 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:57.934 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.934 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.934 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.934 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:57.934 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:57.934 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.934 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.934 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.934 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.934 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.934 10:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.934 10:40:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.934 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.934 10:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.934 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.934 "name": "Existed_Raid", 00:11:57.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.934 "strip_size_kb": 0, 00:11:57.934 "state": "configuring", 00:11:57.934 "raid_level": "raid1", 00:11:57.934 "superblock": false, 00:11:57.934 "num_base_bdevs": 4, 00:11:57.934 "num_base_bdevs_discovered": 3, 00:11:57.934 "num_base_bdevs_operational": 4, 00:11:57.934 "base_bdevs_list": [ 00:11:57.934 { 00:11:57.934 "name": null, 00:11:57.934 "uuid": "cb1abe3e-e27c-4338-b952-aba825128460", 00:11:57.934 "is_configured": false, 00:11:57.934 "data_offset": 0, 00:11:57.934 "data_size": 65536 00:11:57.934 }, 00:11:57.934 { 00:11:57.934 "name": "BaseBdev2", 00:11:57.934 "uuid": "0bbe70b9-fdf1-483c-bd0f-6d8ea86dbc88", 00:11:57.934 "is_configured": true, 00:11:57.934 "data_offset": 0, 00:11:57.934 "data_size": 65536 00:11:57.934 }, 00:11:57.934 { 00:11:57.934 "name": "BaseBdev3", 00:11:57.934 "uuid": "591c6419-c61f-462b-91a3-532a6ac681a4", 00:11:57.934 "is_configured": true, 00:11:57.934 "data_offset": 0, 00:11:57.934 "data_size": 65536 00:11:57.934 }, 00:11:57.934 { 00:11:57.934 "name": "BaseBdev4", 00:11:57.934 "uuid": "616e0cea-64a1-4b35-a739-f8d3a1f8de8a", 00:11:57.934 "is_configured": true, 00:11:57.934 "data_offset": 0, 00:11:57.934 "data_size": 65536 00:11:57.934 } 00:11:57.934 ] 00:11:57.934 }' 00:11:57.934 10:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.934 10:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.501 10:40:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u cb1abe3e-e27c-4338-b952-aba825128460 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.501 [2024-11-15 10:40:19.505728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:58.501 [2024-11-15 10:40:19.505784] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:58.501 [2024-11-15 10:40:19.505800] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:58.501 
[2024-11-15 10:40:19.506129] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:58.501 [2024-11-15 10:40:19.506333] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:58.501 [2024-11-15 10:40:19.506350] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:58.501 [2024-11-15 10:40:19.506680] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.501 NewBaseBdev 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.501 [ 00:11:58.501 { 00:11:58.501 "name": "NewBaseBdev", 00:11:58.501 "aliases": [ 00:11:58.501 "cb1abe3e-e27c-4338-b952-aba825128460" 00:11:58.501 ], 00:11:58.501 "product_name": "Malloc disk", 00:11:58.501 "block_size": 512, 00:11:58.501 "num_blocks": 65536, 00:11:58.501 "uuid": "cb1abe3e-e27c-4338-b952-aba825128460", 00:11:58.501 "assigned_rate_limits": { 00:11:58.501 "rw_ios_per_sec": 0, 00:11:58.501 "rw_mbytes_per_sec": 0, 00:11:58.501 "r_mbytes_per_sec": 0, 00:11:58.501 "w_mbytes_per_sec": 0 00:11:58.501 }, 00:11:58.501 "claimed": true, 00:11:58.501 "claim_type": "exclusive_write", 00:11:58.501 "zoned": false, 00:11:58.501 "supported_io_types": { 00:11:58.501 "read": true, 00:11:58.501 "write": true, 00:11:58.501 "unmap": true, 00:11:58.501 "flush": true, 00:11:58.501 "reset": true, 00:11:58.501 "nvme_admin": false, 00:11:58.501 "nvme_io": false, 00:11:58.501 "nvme_io_md": false, 00:11:58.501 "write_zeroes": true, 00:11:58.501 "zcopy": true, 00:11:58.501 "get_zone_info": false, 00:11:58.501 "zone_management": false, 00:11:58.501 "zone_append": false, 00:11:58.501 "compare": false, 00:11:58.501 "compare_and_write": false, 00:11:58.501 "abort": true, 00:11:58.501 "seek_hole": false, 00:11:58.501 "seek_data": false, 00:11:58.501 "copy": true, 00:11:58.501 "nvme_iov_md": false 00:11:58.501 }, 00:11:58.501 "memory_domains": [ 00:11:58.501 { 00:11:58.501 "dma_device_id": "system", 00:11:58.501 "dma_device_type": 1 00:11:58.501 }, 00:11:58.501 { 00:11:58.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.501 "dma_device_type": 2 00:11:58.501 } 00:11:58.501 ], 00:11:58.501 "driver_specific": {} 00:11:58.501 } 00:11:58.501 ] 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.501 "name": "Existed_Raid", 00:11:58.501 "uuid": "4f1dec70-f427-4320-91f5-e1a89e6956c3", 00:11:58.501 "strip_size_kb": 0, 00:11:58.501 "state": "online", 00:11:58.501 
"raid_level": "raid1", 00:11:58.501 "superblock": false, 00:11:58.501 "num_base_bdevs": 4, 00:11:58.501 "num_base_bdevs_discovered": 4, 00:11:58.501 "num_base_bdevs_operational": 4, 00:11:58.501 "base_bdevs_list": [ 00:11:58.501 { 00:11:58.501 "name": "NewBaseBdev", 00:11:58.501 "uuid": "cb1abe3e-e27c-4338-b952-aba825128460", 00:11:58.501 "is_configured": true, 00:11:58.501 "data_offset": 0, 00:11:58.501 "data_size": 65536 00:11:58.501 }, 00:11:58.501 { 00:11:58.501 "name": "BaseBdev2", 00:11:58.501 "uuid": "0bbe70b9-fdf1-483c-bd0f-6d8ea86dbc88", 00:11:58.501 "is_configured": true, 00:11:58.501 "data_offset": 0, 00:11:58.501 "data_size": 65536 00:11:58.501 }, 00:11:58.501 { 00:11:58.501 "name": "BaseBdev3", 00:11:58.501 "uuid": "591c6419-c61f-462b-91a3-532a6ac681a4", 00:11:58.501 "is_configured": true, 00:11:58.501 "data_offset": 0, 00:11:58.501 "data_size": 65536 00:11:58.501 }, 00:11:58.501 { 00:11:58.501 "name": "BaseBdev4", 00:11:58.501 "uuid": "616e0cea-64a1-4b35-a739-f8d3a1f8de8a", 00:11:58.501 "is_configured": true, 00:11:58.501 "data_offset": 0, 00:11:58.501 "data_size": 65536 00:11:58.501 } 00:11:58.501 ] 00:11:58.501 }' 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.501 10:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.065 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:59.065 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:59.065 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:59.065 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:59.065 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:59.065 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:11:59.065 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:59.065 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:59.065 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.065 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.066 [2024-11-15 10:40:20.038363] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:59.066 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.066 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:59.066 "name": "Existed_Raid", 00:11:59.066 "aliases": [ 00:11:59.066 "4f1dec70-f427-4320-91f5-e1a89e6956c3" 00:11:59.066 ], 00:11:59.066 "product_name": "Raid Volume", 00:11:59.066 "block_size": 512, 00:11:59.066 "num_blocks": 65536, 00:11:59.066 "uuid": "4f1dec70-f427-4320-91f5-e1a89e6956c3", 00:11:59.066 "assigned_rate_limits": { 00:11:59.066 "rw_ios_per_sec": 0, 00:11:59.066 "rw_mbytes_per_sec": 0, 00:11:59.066 "r_mbytes_per_sec": 0, 00:11:59.066 "w_mbytes_per_sec": 0 00:11:59.066 }, 00:11:59.066 "claimed": false, 00:11:59.066 "zoned": false, 00:11:59.066 "supported_io_types": { 00:11:59.066 "read": true, 00:11:59.066 "write": true, 00:11:59.066 "unmap": false, 00:11:59.066 "flush": false, 00:11:59.066 "reset": true, 00:11:59.066 "nvme_admin": false, 00:11:59.066 "nvme_io": false, 00:11:59.066 "nvme_io_md": false, 00:11:59.066 "write_zeroes": true, 00:11:59.066 "zcopy": false, 00:11:59.066 "get_zone_info": false, 00:11:59.066 "zone_management": false, 00:11:59.066 "zone_append": false, 00:11:59.066 "compare": false, 00:11:59.066 "compare_and_write": false, 00:11:59.066 "abort": false, 00:11:59.066 "seek_hole": false, 00:11:59.066 "seek_data": false, 00:11:59.066 
"copy": false, 00:11:59.066 "nvme_iov_md": false 00:11:59.066 }, 00:11:59.066 "memory_domains": [ 00:11:59.066 { 00:11:59.066 "dma_device_id": "system", 00:11:59.066 "dma_device_type": 1 00:11:59.066 }, 00:11:59.066 { 00:11:59.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.066 "dma_device_type": 2 00:11:59.066 }, 00:11:59.066 { 00:11:59.066 "dma_device_id": "system", 00:11:59.066 "dma_device_type": 1 00:11:59.066 }, 00:11:59.066 { 00:11:59.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.066 "dma_device_type": 2 00:11:59.066 }, 00:11:59.066 { 00:11:59.066 "dma_device_id": "system", 00:11:59.066 "dma_device_type": 1 00:11:59.066 }, 00:11:59.066 { 00:11:59.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.066 "dma_device_type": 2 00:11:59.066 }, 00:11:59.066 { 00:11:59.066 "dma_device_id": "system", 00:11:59.066 "dma_device_type": 1 00:11:59.066 }, 00:11:59.066 { 00:11:59.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.066 "dma_device_type": 2 00:11:59.066 } 00:11:59.066 ], 00:11:59.066 "driver_specific": { 00:11:59.066 "raid": { 00:11:59.066 "uuid": "4f1dec70-f427-4320-91f5-e1a89e6956c3", 00:11:59.066 "strip_size_kb": 0, 00:11:59.066 "state": "online", 00:11:59.066 "raid_level": "raid1", 00:11:59.066 "superblock": false, 00:11:59.066 "num_base_bdevs": 4, 00:11:59.066 "num_base_bdevs_discovered": 4, 00:11:59.066 "num_base_bdevs_operational": 4, 00:11:59.066 "base_bdevs_list": [ 00:11:59.066 { 00:11:59.066 "name": "NewBaseBdev", 00:11:59.066 "uuid": "cb1abe3e-e27c-4338-b952-aba825128460", 00:11:59.066 "is_configured": true, 00:11:59.066 "data_offset": 0, 00:11:59.066 "data_size": 65536 00:11:59.066 }, 00:11:59.066 { 00:11:59.066 "name": "BaseBdev2", 00:11:59.066 "uuid": "0bbe70b9-fdf1-483c-bd0f-6d8ea86dbc88", 00:11:59.066 "is_configured": true, 00:11:59.066 "data_offset": 0, 00:11:59.066 "data_size": 65536 00:11:59.066 }, 00:11:59.066 { 00:11:59.066 "name": "BaseBdev3", 00:11:59.066 "uuid": "591c6419-c61f-462b-91a3-532a6ac681a4", 00:11:59.066 
"is_configured": true, 00:11:59.066 "data_offset": 0, 00:11:59.066 "data_size": 65536 00:11:59.066 }, 00:11:59.066 { 00:11:59.066 "name": "BaseBdev4", 00:11:59.066 "uuid": "616e0cea-64a1-4b35-a739-f8d3a1f8de8a", 00:11:59.066 "is_configured": true, 00:11:59.066 "data_offset": 0, 00:11:59.066 "data_size": 65536 00:11:59.066 } 00:11:59.066 ] 00:11:59.066 } 00:11:59.066 } 00:11:59.066 }' 00:11:59.066 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:59.066 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:59.066 BaseBdev2 00:11:59.066 BaseBdev3 00:11:59.066 BaseBdev4' 00:11:59.066 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.066 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:59.066 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.066 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:59.066 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.066 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.066 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.066 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.323 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.323 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.323 10:40:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.323 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:59.323 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.323 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.323 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.323 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.323 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.323 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.323 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.323 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.324 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:59.324 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.324 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.324 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.324 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.324 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.324 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.324 10:40:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:59.324 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.324 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.324 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.324 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.324 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.324 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.324 10:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:59.324 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.324 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.324 [2024-11-15 10:40:20.402017] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:59.324 [2024-11-15 10:40:20.402168] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:59.324 [2024-11-15 10:40:20.402297] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:59.324 [2024-11-15 10:40:20.402675] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:59.324 [2024-11-15 10:40:20.402698] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:59.324 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.324 10:40:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73288 00:11:59.324 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73288 ']' 00:11:59.324 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73288 00:11:59.324 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:59.324 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:59.324 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73288 00:11:59.324 killing process with pid 73288 00:11:59.324 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:59.324 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:59.324 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73288' 00:11:59.324 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73288 00:11:59.324 [2024-11-15 10:40:20.440971] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:59.324 10:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73288 00:11:59.889 [2024-11-15 10:40:20.794122] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:00.824 10:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:00.824 00:12:00.824 real 0m12.659s 00:12:00.824 user 0m21.068s 00:12:00.824 sys 0m1.695s 00:12:00.824 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:00.824 10:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.824 ************************************ 00:12:00.824 END TEST raid_state_function_test 00:12:00.824 ************************************ 
00:12:00.824 10:40:21 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:12:00.824 10:40:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:00.824 10:40:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:00.824 10:40:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:00.824 ************************************ 00:12:00.824 START TEST raid_state_function_test_sb 00:12:00.824 ************************************ 00:12:00.824 10:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:12:00.824 10:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:00.824 10:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:00.824 10:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:00.824 10:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:00.824 10:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:00.824 10:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:00.824 10:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:00.824 10:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:00.824 10:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:00.824 10:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:00.824 10:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:00.824 10:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:00.824 
10:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:00.824 10:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:00.824 10:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:00.824 10:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:00.825 10:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:00.825 10:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:00.825 10:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:00.825 10:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:00.825 10:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:00.825 10:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:00.825 10:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:00.825 10:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:00.825 10:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:00.825 10:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:00.825 10:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:00.825 10:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:00.825 10:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73976 00:12:00.825 10:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:00.825 Process raid pid: 73976 00:12:00.825 10:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73976' 00:12:00.825 10:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73976 00:12:00.825 10:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73976 ']' 00:12:00.825 10:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.825 10:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:00.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.825 10:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.825 10:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:00.825 10:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.106 [2024-11-15 10:40:22.010226] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:12:01.106 [2024-11-15 10:40:22.010444] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:01.106 [2024-11-15 10:40:22.215554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.377 [2024-11-15 10:40:22.374374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.636 [2024-11-15 10:40:22.591899] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:01.636 [2024-11-15 10:40:22.591962] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:01.894 10:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:01.894 10:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:01.894 10:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:01.894 10:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.894 10:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.894 [2024-11-15 10:40:23.010018] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:01.895 [2024-11-15 10:40:23.010085] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:01.895 [2024-11-15 10:40:23.010102] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:01.895 [2024-11-15 10:40:23.010119] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:01.895 [2024-11-15 10:40:23.010129] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:12:01.895 [2024-11-15 10:40:23.010144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:01.895 [2024-11-15 10:40:23.010154] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:01.895 [2024-11-15 10:40:23.010168] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:01.895 10:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.895 10:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:01.895 10:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.895 10:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.895 10:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.895 10:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.895 10:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.895 10:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.895 10:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.895 10:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.895 10:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.895 10:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.895 10:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.895 10:40:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.895 10:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.895 10:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.153 10:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.153 "name": "Existed_Raid", 00:12:02.153 "uuid": "ef9a301e-ac4d-46ce-a3c0-6ebeea346fc3", 00:12:02.153 "strip_size_kb": 0, 00:12:02.153 "state": "configuring", 00:12:02.153 "raid_level": "raid1", 00:12:02.153 "superblock": true, 00:12:02.153 "num_base_bdevs": 4, 00:12:02.153 "num_base_bdevs_discovered": 0, 00:12:02.153 "num_base_bdevs_operational": 4, 00:12:02.153 "base_bdevs_list": [ 00:12:02.153 { 00:12:02.153 "name": "BaseBdev1", 00:12:02.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.153 "is_configured": false, 00:12:02.153 "data_offset": 0, 00:12:02.153 "data_size": 0 00:12:02.153 }, 00:12:02.153 { 00:12:02.153 "name": "BaseBdev2", 00:12:02.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.153 "is_configured": false, 00:12:02.153 "data_offset": 0, 00:12:02.153 "data_size": 0 00:12:02.153 }, 00:12:02.153 { 00:12:02.153 "name": "BaseBdev3", 00:12:02.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.153 "is_configured": false, 00:12:02.153 "data_offset": 0, 00:12:02.153 "data_size": 0 00:12:02.153 }, 00:12:02.153 { 00:12:02.153 "name": "BaseBdev4", 00:12:02.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.153 "is_configured": false, 00:12:02.153 "data_offset": 0, 00:12:02.153 "data_size": 0 00:12:02.153 } 00:12:02.153 ] 00:12:02.153 }' 00:12:02.153 10:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.153 10:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.411 10:40:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:02.411 10:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.411 10:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.411 [2024-11-15 10:40:23.490088] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:02.411 [2024-11-15 10:40:23.490135] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:02.411 10:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.411 10:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:02.411 10:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.411 10:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.411 [2024-11-15 10:40:23.502064] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:02.411 [2024-11-15 10:40:23.502258] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:02.411 [2024-11-15 10:40:23.502383] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:02.411 [2024-11-15 10:40:23.502445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:02.411 [2024-11-15 10:40:23.502576] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:02.411 [2024-11-15 10:40:23.502611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:02.411 [2024-11-15 10:40:23.502623] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:12:02.411 [2024-11-15 10:40:23.502639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:02.411 10:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.411 10:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:02.411 10:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.411 10:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.411 [2024-11-15 10:40:23.546675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:02.411 BaseBdev1 00:12:02.411 10:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.411 10:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:02.411 10:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:02.411 10:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:02.411 10:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:02.411 10:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:02.411 10:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:02.411 10:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:02.411 10:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.411 10:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.411 10:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:02.411 10:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:02.411 10:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.411 10:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.412 [ 00:12:02.412 { 00:12:02.412 "name": "BaseBdev1", 00:12:02.412 "aliases": [ 00:12:02.412 "a0f734df-b2ad-4ad0-a61e-bbfff24b8429" 00:12:02.412 ], 00:12:02.412 "product_name": "Malloc disk", 00:12:02.412 "block_size": 512, 00:12:02.412 "num_blocks": 65536, 00:12:02.412 "uuid": "a0f734df-b2ad-4ad0-a61e-bbfff24b8429", 00:12:02.412 "assigned_rate_limits": { 00:12:02.412 "rw_ios_per_sec": 0, 00:12:02.412 "rw_mbytes_per_sec": 0, 00:12:02.670 "r_mbytes_per_sec": 0, 00:12:02.670 "w_mbytes_per_sec": 0 00:12:02.670 }, 00:12:02.670 "claimed": true, 00:12:02.670 "claim_type": "exclusive_write", 00:12:02.670 "zoned": false, 00:12:02.670 "supported_io_types": { 00:12:02.670 "read": true, 00:12:02.670 "write": true, 00:12:02.670 "unmap": true, 00:12:02.670 "flush": true, 00:12:02.670 "reset": true, 00:12:02.670 "nvme_admin": false, 00:12:02.670 "nvme_io": false, 00:12:02.670 "nvme_io_md": false, 00:12:02.670 "write_zeroes": true, 00:12:02.670 "zcopy": true, 00:12:02.670 "get_zone_info": false, 00:12:02.670 "zone_management": false, 00:12:02.670 "zone_append": false, 00:12:02.670 "compare": false, 00:12:02.670 "compare_and_write": false, 00:12:02.670 "abort": true, 00:12:02.670 "seek_hole": false, 00:12:02.670 "seek_data": false, 00:12:02.670 "copy": true, 00:12:02.670 "nvme_iov_md": false 00:12:02.670 }, 00:12:02.670 "memory_domains": [ 00:12:02.670 { 00:12:02.670 "dma_device_id": "system", 00:12:02.670 "dma_device_type": 1 00:12:02.670 }, 00:12:02.670 { 00:12:02.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.670 "dma_device_type": 2 00:12:02.670 } 00:12:02.670 ], 00:12:02.670 "driver_specific": {} 
00:12:02.670 } 00:12:02.670 ] 00:12:02.670 10:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.670 10:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:02.670 10:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:02.670 10:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.670 10:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.670 10:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.670 10:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.670 10:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.670 10:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.670 10:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.670 10:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.670 10:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.670 10:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.670 10:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.670 10:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.670 10:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.670 10:40:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.670 10:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.670 "name": "Existed_Raid", 00:12:02.670 "uuid": "886d56a3-71b4-4377-b211-6600de470c85", 00:12:02.670 "strip_size_kb": 0, 00:12:02.670 "state": "configuring", 00:12:02.670 "raid_level": "raid1", 00:12:02.670 "superblock": true, 00:12:02.670 "num_base_bdevs": 4, 00:12:02.670 "num_base_bdevs_discovered": 1, 00:12:02.670 "num_base_bdevs_operational": 4, 00:12:02.670 "base_bdevs_list": [ 00:12:02.670 { 00:12:02.670 "name": "BaseBdev1", 00:12:02.670 "uuid": "a0f734df-b2ad-4ad0-a61e-bbfff24b8429", 00:12:02.670 "is_configured": true, 00:12:02.670 "data_offset": 2048, 00:12:02.670 "data_size": 63488 00:12:02.670 }, 00:12:02.670 { 00:12:02.670 "name": "BaseBdev2", 00:12:02.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.670 "is_configured": false, 00:12:02.670 "data_offset": 0, 00:12:02.670 "data_size": 0 00:12:02.670 }, 00:12:02.670 { 00:12:02.670 "name": "BaseBdev3", 00:12:02.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.670 "is_configured": false, 00:12:02.670 "data_offset": 0, 00:12:02.670 "data_size": 0 00:12:02.670 }, 00:12:02.670 { 00:12:02.670 "name": "BaseBdev4", 00:12:02.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.670 "is_configured": false, 00:12:02.670 "data_offset": 0, 00:12:02.670 "data_size": 0 00:12:02.670 } 00:12:02.670 ] 00:12:02.670 }' 00:12:02.670 10:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.670 10:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.236 10:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:03.236 10:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.236 10:40:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:03.236 [2024-11-15 10:40:24.098862] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:03.236 [2024-11-15 10:40:24.098929] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:03.236 10:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.236 10:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:03.236 10:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.236 10:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.236 [2024-11-15 10:40:24.106912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:03.236 [2024-11-15 10:40:24.109422] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:03.236 [2024-11-15 10:40:24.109478] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:03.236 [2024-11-15 10:40:24.109513] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:03.236 [2024-11-15 10:40:24.109544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:03.236 [2024-11-15 10:40:24.109555] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:03.236 [2024-11-15 10:40:24.109569] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:03.236 10:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.236 10:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:03.236 10:40:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:03.236 10:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:03.236 10:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.236 10:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.236 10:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.236 10:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.236 10:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:03.236 10:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.236 10:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.236 10:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.236 10:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.236 10:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.236 10:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.236 10:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.236 10:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.236 10:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.236 10:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.236 "name": 
"Existed_Raid", 00:12:03.236 "uuid": "2cf694a4-6f83-405c-a3ab-4430cbe77c71", 00:12:03.236 "strip_size_kb": 0, 00:12:03.236 "state": "configuring", 00:12:03.236 "raid_level": "raid1", 00:12:03.236 "superblock": true, 00:12:03.236 "num_base_bdevs": 4, 00:12:03.236 "num_base_bdevs_discovered": 1, 00:12:03.236 "num_base_bdevs_operational": 4, 00:12:03.236 "base_bdevs_list": [ 00:12:03.236 { 00:12:03.236 "name": "BaseBdev1", 00:12:03.236 "uuid": "a0f734df-b2ad-4ad0-a61e-bbfff24b8429", 00:12:03.236 "is_configured": true, 00:12:03.236 "data_offset": 2048, 00:12:03.236 "data_size": 63488 00:12:03.236 }, 00:12:03.236 { 00:12:03.236 "name": "BaseBdev2", 00:12:03.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.236 "is_configured": false, 00:12:03.236 "data_offset": 0, 00:12:03.236 "data_size": 0 00:12:03.236 }, 00:12:03.236 { 00:12:03.236 "name": "BaseBdev3", 00:12:03.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.236 "is_configured": false, 00:12:03.236 "data_offset": 0, 00:12:03.236 "data_size": 0 00:12:03.236 }, 00:12:03.236 { 00:12:03.236 "name": "BaseBdev4", 00:12:03.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.236 "is_configured": false, 00:12:03.236 "data_offset": 0, 00:12:03.236 "data_size": 0 00:12:03.236 } 00:12:03.236 ] 00:12:03.236 }' 00:12:03.236 10:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.236 10:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.494 10:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:03.494 10:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.494 10:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.752 [2024-11-15 10:40:24.665723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:03.752 
BaseBdev2 00:12:03.752 10:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.752 10:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:03.752 10:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:03.752 10:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:03.752 10:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:03.752 10:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:03.752 10:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:03.752 10:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:03.752 10:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.752 10:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.752 10:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.752 10:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:03.752 10:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.752 10:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.752 [ 00:12:03.752 { 00:12:03.752 "name": "BaseBdev2", 00:12:03.752 "aliases": [ 00:12:03.752 "2fb18415-f9c6-46f5-8fdf-231f9d103bdc" 00:12:03.752 ], 00:12:03.752 "product_name": "Malloc disk", 00:12:03.752 "block_size": 512, 00:12:03.752 "num_blocks": 65536, 00:12:03.752 "uuid": "2fb18415-f9c6-46f5-8fdf-231f9d103bdc", 00:12:03.752 "assigned_rate_limits": { 
00:12:03.752 "rw_ios_per_sec": 0, 00:12:03.752 "rw_mbytes_per_sec": 0, 00:12:03.752 "r_mbytes_per_sec": 0, 00:12:03.752 "w_mbytes_per_sec": 0 00:12:03.752 }, 00:12:03.752 "claimed": true, 00:12:03.752 "claim_type": "exclusive_write", 00:12:03.752 "zoned": false, 00:12:03.752 "supported_io_types": { 00:12:03.752 "read": true, 00:12:03.752 "write": true, 00:12:03.752 "unmap": true, 00:12:03.752 "flush": true, 00:12:03.752 "reset": true, 00:12:03.752 "nvme_admin": false, 00:12:03.752 "nvme_io": false, 00:12:03.752 "nvme_io_md": false, 00:12:03.752 "write_zeroes": true, 00:12:03.752 "zcopy": true, 00:12:03.752 "get_zone_info": false, 00:12:03.752 "zone_management": false, 00:12:03.752 "zone_append": false, 00:12:03.752 "compare": false, 00:12:03.752 "compare_and_write": false, 00:12:03.752 "abort": true, 00:12:03.752 "seek_hole": false, 00:12:03.752 "seek_data": false, 00:12:03.752 "copy": true, 00:12:03.752 "nvme_iov_md": false 00:12:03.752 }, 00:12:03.752 "memory_domains": [ 00:12:03.752 { 00:12:03.752 "dma_device_id": "system", 00:12:03.752 "dma_device_type": 1 00:12:03.752 }, 00:12:03.752 { 00:12:03.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.752 "dma_device_type": 2 00:12:03.752 } 00:12:03.752 ], 00:12:03.752 "driver_specific": {} 00:12:03.752 } 00:12:03.752 ] 00:12:03.752 10:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.752 10:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:03.752 10:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:03.752 10:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:03.752 10:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:03.752 10:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:12:03.752 10:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.752 10:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.752 10:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.752 10:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:03.752 10:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.752 10:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.752 10:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.752 10:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.752 10:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.752 10:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.752 10:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.752 10:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.752 10:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.752 10:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.752 "name": "Existed_Raid", 00:12:03.752 "uuid": "2cf694a4-6f83-405c-a3ab-4430cbe77c71", 00:12:03.752 "strip_size_kb": 0, 00:12:03.752 "state": "configuring", 00:12:03.752 "raid_level": "raid1", 00:12:03.752 "superblock": true, 00:12:03.752 "num_base_bdevs": 4, 00:12:03.752 "num_base_bdevs_discovered": 2, 00:12:03.752 "num_base_bdevs_operational": 4, 00:12:03.752 
"base_bdevs_list": [ 00:12:03.752 { 00:12:03.752 "name": "BaseBdev1", 00:12:03.752 "uuid": "a0f734df-b2ad-4ad0-a61e-bbfff24b8429", 00:12:03.752 "is_configured": true, 00:12:03.752 "data_offset": 2048, 00:12:03.752 "data_size": 63488 00:12:03.752 }, 00:12:03.752 { 00:12:03.752 "name": "BaseBdev2", 00:12:03.752 "uuid": "2fb18415-f9c6-46f5-8fdf-231f9d103bdc", 00:12:03.752 "is_configured": true, 00:12:03.752 "data_offset": 2048, 00:12:03.752 "data_size": 63488 00:12:03.752 }, 00:12:03.752 { 00:12:03.752 "name": "BaseBdev3", 00:12:03.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.752 "is_configured": false, 00:12:03.752 "data_offset": 0, 00:12:03.752 "data_size": 0 00:12:03.752 }, 00:12:03.752 { 00:12:03.752 "name": "BaseBdev4", 00:12:03.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.752 "is_configured": false, 00:12:03.752 "data_offset": 0, 00:12:03.752 "data_size": 0 00:12:03.752 } 00:12:03.752 ] 00:12:03.752 }' 00:12:03.752 10:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.752 10:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.319 10:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:04.319 10:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.319 10:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.319 [2024-11-15 10:40:25.273112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:04.319 BaseBdev3 00:12:04.319 10:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.319 10:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:04.319 10:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:12:04.319 10:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:04.319 10:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:04.319 10:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:04.319 10:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:04.319 10:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:04.319 10:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.319 10:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.319 10:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.319 10:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:04.319 10:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.319 10:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.319 [ 00:12:04.319 { 00:12:04.319 "name": "BaseBdev3", 00:12:04.319 "aliases": [ 00:12:04.319 "93de3cdb-de3f-462a-b4a9-6910b5e1e692" 00:12:04.319 ], 00:12:04.319 "product_name": "Malloc disk", 00:12:04.319 "block_size": 512, 00:12:04.319 "num_blocks": 65536, 00:12:04.319 "uuid": "93de3cdb-de3f-462a-b4a9-6910b5e1e692", 00:12:04.319 "assigned_rate_limits": { 00:12:04.319 "rw_ios_per_sec": 0, 00:12:04.319 "rw_mbytes_per_sec": 0, 00:12:04.319 "r_mbytes_per_sec": 0, 00:12:04.319 "w_mbytes_per_sec": 0 00:12:04.319 }, 00:12:04.319 "claimed": true, 00:12:04.319 "claim_type": "exclusive_write", 00:12:04.319 "zoned": false, 00:12:04.319 "supported_io_types": { 00:12:04.319 "read": true, 00:12:04.319 
"write": true, 00:12:04.319 "unmap": true, 00:12:04.319 "flush": true, 00:12:04.319 "reset": true, 00:12:04.319 "nvme_admin": false, 00:12:04.319 "nvme_io": false, 00:12:04.319 "nvme_io_md": false, 00:12:04.319 "write_zeroes": true, 00:12:04.319 "zcopy": true, 00:12:04.319 "get_zone_info": false, 00:12:04.319 "zone_management": false, 00:12:04.319 "zone_append": false, 00:12:04.319 "compare": false, 00:12:04.319 "compare_and_write": false, 00:12:04.319 "abort": true, 00:12:04.319 "seek_hole": false, 00:12:04.319 "seek_data": false, 00:12:04.319 "copy": true, 00:12:04.319 "nvme_iov_md": false 00:12:04.319 }, 00:12:04.319 "memory_domains": [ 00:12:04.319 { 00:12:04.319 "dma_device_id": "system", 00:12:04.319 "dma_device_type": 1 00:12:04.319 }, 00:12:04.319 { 00:12:04.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.319 "dma_device_type": 2 00:12:04.319 } 00:12:04.319 ], 00:12:04.319 "driver_specific": {} 00:12:04.319 } 00:12:04.319 ] 00:12:04.319 10:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.319 10:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:04.319 10:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:04.319 10:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:04.319 10:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:04.319 10:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.319 10:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:04.319 10:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.319 10:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:12:04.319 10:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:04.319 10:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.319 10:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.319 10:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.319 10:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.320 10:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.320 10:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.320 10:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.320 10:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.320 10:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.320 10:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.320 "name": "Existed_Raid", 00:12:04.320 "uuid": "2cf694a4-6f83-405c-a3ab-4430cbe77c71", 00:12:04.320 "strip_size_kb": 0, 00:12:04.320 "state": "configuring", 00:12:04.320 "raid_level": "raid1", 00:12:04.320 "superblock": true, 00:12:04.320 "num_base_bdevs": 4, 00:12:04.320 "num_base_bdevs_discovered": 3, 00:12:04.320 "num_base_bdevs_operational": 4, 00:12:04.320 "base_bdevs_list": [ 00:12:04.320 { 00:12:04.320 "name": "BaseBdev1", 00:12:04.320 "uuid": "a0f734df-b2ad-4ad0-a61e-bbfff24b8429", 00:12:04.320 "is_configured": true, 00:12:04.320 "data_offset": 2048, 00:12:04.320 "data_size": 63488 00:12:04.320 }, 00:12:04.320 { 00:12:04.320 "name": "BaseBdev2", 00:12:04.320 "uuid": 
"2fb18415-f9c6-46f5-8fdf-231f9d103bdc", 00:12:04.320 "is_configured": true, 00:12:04.320 "data_offset": 2048, 00:12:04.320 "data_size": 63488 00:12:04.320 }, 00:12:04.320 { 00:12:04.320 "name": "BaseBdev3", 00:12:04.320 "uuid": "93de3cdb-de3f-462a-b4a9-6910b5e1e692", 00:12:04.320 "is_configured": true, 00:12:04.320 "data_offset": 2048, 00:12:04.320 "data_size": 63488 00:12:04.320 }, 00:12:04.320 { 00:12:04.320 "name": "BaseBdev4", 00:12:04.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.320 "is_configured": false, 00:12:04.320 "data_offset": 0, 00:12:04.320 "data_size": 0 00:12:04.320 } 00:12:04.320 ] 00:12:04.320 }' 00:12:04.320 10:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.320 10:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.887 10:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:04.887 10:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.887 10:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.887 [2024-11-15 10:40:25.879706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:04.887 [2024-11-15 10:40:25.880026] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:04.887 [2024-11-15 10:40:25.880046] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:04.887 BaseBdev4 00:12:04.887 [2024-11-15 10:40:25.880389] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:04.887 [2024-11-15 10:40:25.880623] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:04.887 [2024-11-15 10:40:25.880645] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:12:04.887 [2024-11-15 10:40:25.880835] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:04.887 10:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.887 10:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:04.887 10:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:04.887 10:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:04.887 10:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:04.887 10:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:04.887 10:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:04.887 10:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:04.887 10:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.887 10:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.887 10:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.887 10:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:04.887 10:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.887 10:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.887 [ 00:12:04.887 { 00:12:04.887 "name": "BaseBdev4", 00:12:04.887 "aliases": [ 00:12:04.887 "1a689fdd-5cd6-418b-ac07-848731806669" 00:12:04.887 ], 00:12:04.887 "product_name": "Malloc disk", 00:12:04.887 "block_size": 512, 00:12:04.887 
"num_blocks": 65536, 00:12:04.887 "uuid": "1a689fdd-5cd6-418b-ac07-848731806669", 00:12:04.887 "assigned_rate_limits": { 00:12:04.887 "rw_ios_per_sec": 0, 00:12:04.887 "rw_mbytes_per_sec": 0, 00:12:04.887 "r_mbytes_per_sec": 0, 00:12:04.887 "w_mbytes_per_sec": 0 00:12:04.887 }, 00:12:04.887 "claimed": true, 00:12:04.887 "claim_type": "exclusive_write", 00:12:04.887 "zoned": false, 00:12:04.887 "supported_io_types": { 00:12:04.887 "read": true, 00:12:04.887 "write": true, 00:12:04.887 "unmap": true, 00:12:04.887 "flush": true, 00:12:04.887 "reset": true, 00:12:04.887 "nvme_admin": false, 00:12:04.887 "nvme_io": false, 00:12:04.887 "nvme_io_md": false, 00:12:04.887 "write_zeroes": true, 00:12:04.887 "zcopy": true, 00:12:04.887 "get_zone_info": false, 00:12:04.887 "zone_management": false, 00:12:04.887 "zone_append": false, 00:12:04.887 "compare": false, 00:12:04.887 "compare_and_write": false, 00:12:04.887 "abort": true, 00:12:04.887 "seek_hole": false, 00:12:04.887 "seek_data": false, 00:12:04.887 "copy": true, 00:12:04.887 "nvme_iov_md": false 00:12:04.887 }, 00:12:04.887 "memory_domains": [ 00:12:04.887 { 00:12:04.887 "dma_device_id": "system", 00:12:04.887 "dma_device_type": 1 00:12:04.887 }, 00:12:04.887 { 00:12:04.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.887 "dma_device_type": 2 00:12:04.887 } 00:12:04.887 ], 00:12:04.887 "driver_specific": {} 00:12:04.887 } 00:12:04.887 ] 00:12:04.887 10:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.887 10:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:04.887 10:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:04.887 10:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:04.887 10:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:12:04.887 10:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.887 10:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:04.887 10:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.887 10:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.887 10:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:04.887 10:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.887 10:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.887 10:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.887 10:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.887 10:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.887 10:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.887 10:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.887 10:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.887 10:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.887 10:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.887 "name": "Existed_Raid", 00:12:04.887 "uuid": "2cf694a4-6f83-405c-a3ab-4430cbe77c71", 00:12:04.887 "strip_size_kb": 0, 00:12:04.887 "state": "online", 00:12:04.887 "raid_level": "raid1", 00:12:04.887 "superblock": true, 00:12:04.887 "num_base_bdevs": 4, 
00:12:04.887 "num_base_bdevs_discovered": 4, 00:12:04.887 "num_base_bdevs_operational": 4, 00:12:04.887 "base_bdevs_list": [ 00:12:04.887 { 00:12:04.887 "name": "BaseBdev1", 00:12:04.887 "uuid": "a0f734df-b2ad-4ad0-a61e-bbfff24b8429", 00:12:04.887 "is_configured": true, 00:12:04.887 "data_offset": 2048, 00:12:04.887 "data_size": 63488 00:12:04.887 }, 00:12:04.887 { 00:12:04.887 "name": "BaseBdev2", 00:12:04.887 "uuid": "2fb18415-f9c6-46f5-8fdf-231f9d103bdc", 00:12:04.887 "is_configured": true, 00:12:04.887 "data_offset": 2048, 00:12:04.887 "data_size": 63488 00:12:04.887 }, 00:12:04.887 { 00:12:04.887 "name": "BaseBdev3", 00:12:04.887 "uuid": "93de3cdb-de3f-462a-b4a9-6910b5e1e692", 00:12:04.887 "is_configured": true, 00:12:04.887 "data_offset": 2048, 00:12:04.888 "data_size": 63488 00:12:04.888 }, 00:12:04.888 { 00:12:04.888 "name": "BaseBdev4", 00:12:04.888 "uuid": "1a689fdd-5cd6-418b-ac07-848731806669", 00:12:04.888 "is_configured": true, 00:12:04.888 "data_offset": 2048, 00:12:04.888 "data_size": 63488 00:12:04.888 } 00:12:04.888 ] 00:12:04.888 }' 00:12:04.888 10:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.888 10:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.456 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:05.456 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:05.456 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:05.456 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:05.456 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:05.456 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:05.456 
10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:05.456 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:05.456 10:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.456 10:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.456 [2024-11-15 10:40:26.456351] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:05.456 10:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.456 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:05.456 "name": "Existed_Raid", 00:12:05.456 "aliases": [ 00:12:05.456 "2cf694a4-6f83-405c-a3ab-4430cbe77c71" 00:12:05.456 ], 00:12:05.456 "product_name": "Raid Volume", 00:12:05.456 "block_size": 512, 00:12:05.456 "num_blocks": 63488, 00:12:05.456 "uuid": "2cf694a4-6f83-405c-a3ab-4430cbe77c71", 00:12:05.456 "assigned_rate_limits": { 00:12:05.456 "rw_ios_per_sec": 0, 00:12:05.456 "rw_mbytes_per_sec": 0, 00:12:05.456 "r_mbytes_per_sec": 0, 00:12:05.456 "w_mbytes_per_sec": 0 00:12:05.456 }, 00:12:05.456 "claimed": false, 00:12:05.456 "zoned": false, 00:12:05.456 "supported_io_types": { 00:12:05.456 "read": true, 00:12:05.456 "write": true, 00:12:05.456 "unmap": false, 00:12:05.456 "flush": false, 00:12:05.456 "reset": true, 00:12:05.456 "nvme_admin": false, 00:12:05.456 "nvme_io": false, 00:12:05.456 "nvme_io_md": false, 00:12:05.456 "write_zeroes": true, 00:12:05.456 "zcopy": false, 00:12:05.456 "get_zone_info": false, 00:12:05.456 "zone_management": false, 00:12:05.456 "zone_append": false, 00:12:05.456 "compare": false, 00:12:05.456 "compare_and_write": false, 00:12:05.456 "abort": false, 00:12:05.456 "seek_hole": false, 00:12:05.456 "seek_data": false, 00:12:05.456 "copy": false, 00:12:05.456 
"nvme_iov_md": false 00:12:05.456 }, 00:12:05.456 "memory_domains": [ 00:12:05.456 { 00:12:05.456 "dma_device_id": "system", 00:12:05.456 "dma_device_type": 1 00:12:05.456 }, 00:12:05.456 { 00:12:05.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.456 "dma_device_type": 2 00:12:05.456 }, 00:12:05.456 { 00:12:05.456 "dma_device_id": "system", 00:12:05.456 "dma_device_type": 1 00:12:05.456 }, 00:12:05.456 { 00:12:05.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.456 "dma_device_type": 2 00:12:05.456 }, 00:12:05.456 { 00:12:05.456 "dma_device_id": "system", 00:12:05.456 "dma_device_type": 1 00:12:05.456 }, 00:12:05.456 { 00:12:05.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.456 "dma_device_type": 2 00:12:05.456 }, 00:12:05.456 { 00:12:05.456 "dma_device_id": "system", 00:12:05.456 "dma_device_type": 1 00:12:05.456 }, 00:12:05.456 { 00:12:05.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.456 "dma_device_type": 2 00:12:05.456 } 00:12:05.456 ], 00:12:05.456 "driver_specific": { 00:12:05.456 "raid": { 00:12:05.456 "uuid": "2cf694a4-6f83-405c-a3ab-4430cbe77c71", 00:12:05.456 "strip_size_kb": 0, 00:12:05.456 "state": "online", 00:12:05.456 "raid_level": "raid1", 00:12:05.456 "superblock": true, 00:12:05.456 "num_base_bdevs": 4, 00:12:05.456 "num_base_bdevs_discovered": 4, 00:12:05.456 "num_base_bdevs_operational": 4, 00:12:05.456 "base_bdevs_list": [ 00:12:05.456 { 00:12:05.456 "name": "BaseBdev1", 00:12:05.456 "uuid": "a0f734df-b2ad-4ad0-a61e-bbfff24b8429", 00:12:05.456 "is_configured": true, 00:12:05.456 "data_offset": 2048, 00:12:05.456 "data_size": 63488 00:12:05.456 }, 00:12:05.456 { 00:12:05.456 "name": "BaseBdev2", 00:12:05.456 "uuid": "2fb18415-f9c6-46f5-8fdf-231f9d103bdc", 00:12:05.456 "is_configured": true, 00:12:05.456 "data_offset": 2048, 00:12:05.456 "data_size": 63488 00:12:05.456 }, 00:12:05.456 { 00:12:05.456 "name": "BaseBdev3", 00:12:05.456 "uuid": "93de3cdb-de3f-462a-b4a9-6910b5e1e692", 00:12:05.456 "is_configured": true, 
00:12:05.456 "data_offset": 2048, 00:12:05.456 "data_size": 63488 00:12:05.456 }, 00:12:05.456 { 00:12:05.456 "name": "BaseBdev4", 00:12:05.456 "uuid": "1a689fdd-5cd6-418b-ac07-848731806669", 00:12:05.456 "is_configured": true, 00:12:05.456 "data_offset": 2048, 00:12:05.456 "data_size": 63488 00:12:05.456 } 00:12:05.456 ] 00:12:05.456 } 00:12:05.456 } 00:12:05.456 }' 00:12:05.456 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:05.456 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:05.456 BaseBdev2 00:12:05.456 BaseBdev3 00:12:05.456 BaseBdev4' 00:12:05.456 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.456 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:05.456 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.456 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:05.456 10:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.457 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.457 10:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.716 10:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.716 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.716 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.716 10:40:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.716 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:05.716 10:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.716 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.716 10:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.716 10:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.716 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.716 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.716 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.716 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:05.716 10:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.716 10:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.716 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.716 10:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.716 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.716 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.716 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:05.716 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:05.716 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.716 10:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.716 10:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.716 10:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.716 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.716 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.716 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:05.716 10:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.716 10:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.716 [2024-11-15 10:40:26.808165] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:05.975 10:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.975 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:05.975 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:05.975 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:05.975 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:05.975 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:05.975 10:40:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:05.975 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.975 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.975 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.975 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.975 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:05.975 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.975 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.975 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.975 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.975 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.975 10:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.975 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.975 10:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.975 10:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.975 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.975 "name": "Existed_Raid", 00:12:05.975 "uuid": "2cf694a4-6f83-405c-a3ab-4430cbe77c71", 00:12:05.975 "strip_size_kb": 0, 00:12:05.975 
"state": "online", 00:12:05.975 "raid_level": "raid1", 00:12:05.975 "superblock": true, 00:12:05.975 "num_base_bdevs": 4, 00:12:05.975 "num_base_bdevs_discovered": 3, 00:12:05.975 "num_base_bdevs_operational": 3, 00:12:05.975 "base_bdevs_list": [ 00:12:05.975 { 00:12:05.975 "name": null, 00:12:05.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.975 "is_configured": false, 00:12:05.975 "data_offset": 0, 00:12:05.975 "data_size": 63488 00:12:05.975 }, 00:12:05.975 { 00:12:05.975 "name": "BaseBdev2", 00:12:05.975 "uuid": "2fb18415-f9c6-46f5-8fdf-231f9d103bdc", 00:12:05.975 "is_configured": true, 00:12:05.975 "data_offset": 2048, 00:12:05.975 "data_size": 63488 00:12:05.975 }, 00:12:05.975 { 00:12:05.975 "name": "BaseBdev3", 00:12:05.975 "uuid": "93de3cdb-de3f-462a-b4a9-6910b5e1e692", 00:12:05.975 "is_configured": true, 00:12:05.975 "data_offset": 2048, 00:12:05.975 "data_size": 63488 00:12:05.975 }, 00:12:05.975 { 00:12:05.975 "name": "BaseBdev4", 00:12:05.975 "uuid": "1a689fdd-5cd6-418b-ac07-848731806669", 00:12:05.975 "is_configured": true, 00:12:05.975 "data_offset": 2048, 00:12:05.975 "data_size": 63488 00:12:05.975 } 00:12:05.975 ] 00:12:05.975 }' 00:12:05.975 10:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.975 10:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.319 10:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:06.319 10:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:06.319 10:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.319 10:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.319 10:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.319 10:40:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:06.319 10:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.592 10:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:06.592 10:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:06.592 10:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:06.592 10:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.592 10:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.592 [2024-11-15 10:40:27.465742] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:06.592 10:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.592 10:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:06.592 10:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:06.592 10:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.592 10:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.592 10:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:06.592 10:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.592 10:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.592 10:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:06.592 10:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid 
'!=' Existed_Raid ']' 00:12:06.592 10:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:06.592 10:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.592 10:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.592 [2024-11-15 10:40:27.612937] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:06.592 10:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.592 10:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:06.592 10:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:06.592 10:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.592 10:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.592 10:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:06.592 10:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.592 10:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.850 10:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:06.850 10:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:06.850 10:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:06.850 10:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.850 10:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.850 [2024-11-15 10:40:27.761965] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:06.850 [2024-11-15 10:40:27.762230] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:06.850 [2024-11-15 10:40:27.848881] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:06.850 [2024-11-15 10:40:27.849172] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:06.850 [2024-11-15 10:40:27.849207] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:06.850 10:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.850 10:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:06.850 10:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:06.850 10:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.850 10:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.850 10:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.850 10:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:06.850 10:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.850 10:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:06.850 10:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:06.850 10:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:06.850 10:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:06.850 10:40:27 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:06.850 10:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:06.850 10:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.850 10:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.850 BaseBdev2 00:12:06.850 10:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.850 10:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:06.850 10:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:06.850 10:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:06.850 10:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:06.850 10:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:06.850 10:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:06.850 10:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:06.850 10:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.850 10:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.850 10:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.850 10:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:06.850 10:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.850 10:40:27 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:12:06.850 [ 00:12:06.850 { 00:12:06.851 "name": "BaseBdev2", 00:12:06.851 "aliases": [ 00:12:06.851 "b37ad03c-3e06-4d13-9a33-ae450f9bf70b" 00:12:06.851 ], 00:12:06.851 "product_name": "Malloc disk", 00:12:06.851 "block_size": 512, 00:12:06.851 "num_blocks": 65536, 00:12:06.851 "uuid": "b37ad03c-3e06-4d13-9a33-ae450f9bf70b", 00:12:06.851 "assigned_rate_limits": { 00:12:06.851 "rw_ios_per_sec": 0, 00:12:06.851 "rw_mbytes_per_sec": 0, 00:12:06.851 "r_mbytes_per_sec": 0, 00:12:06.851 "w_mbytes_per_sec": 0 00:12:06.851 }, 00:12:06.851 "claimed": false, 00:12:06.851 "zoned": false, 00:12:06.851 "supported_io_types": { 00:12:06.851 "read": true, 00:12:06.851 "write": true, 00:12:06.851 "unmap": true, 00:12:06.851 "flush": true, 00:12:06.851 "reset": true, 00:12:06.851 "nvme_admin": false, 00:12:06.851 "nvme_io": false, 00:12:06.851 "nvme_io_md": false, 00:12:06.851 "write_zeroes": true, 00:12:06.851 "zcopy": true, 00:12:06.851 "get_zone_info": false, 00:12:06.851 "zone_management": false, 00:12:06.851 "zone_append": false, 00:12:06.851 "compare": false, 00:12:06.851 "compare_and_write": false, 00:12:06.851 "abort": true, 00:12:06.851 "seek_hole": false, 00:12:06.851 "seek_data": false, 00:12:06.851 "copy": true, 00:12:06.851 "nvme_iov_md": false 00:12:06.851 }, 00:12:06.851 "memory_domains": [ 00:12:06.851 { 00:12:06.851 "dma_device_id": "system", 00:12:06.851 "dma_device_type": 1 00:12:06.851 }, 00:12:06.851 { 00:12:06.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.851 "dma_device_type": 2 00:12:06.851 } 00:12:06.851 ], 00:12:06.851 "driver_specific": {} 00:12:06.851 } 00:12:06.851 ] 00:12:06.851 10:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.851 10:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:06.851 10:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:06.851 10:40:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:06.851 10:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:06.851 10:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.851 10:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.110 BaseBdev3 00:12:07.110 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.110 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:07.110 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:07.110 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:07.110 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:07.110 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:07.110 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:07.110 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:07.110 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.110 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.110 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.110 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:07.110 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.110 10:40:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.110 [ 00:12:07.110 { 00:12:07.110 "name": "BaseBdev3", 00:12:07.110 "aliases": [ 00:12:07.110 "a6c2878c-5edb-4561-9a14-1bf8e24a6e0b" 00:12:07.110 ], 00:12:07.110 "product_name": "Malloc disk", 00:12:07.110 "block_size": 512, 00:12:07.110 "num_blocks": 65536, 00:12:07.110 "uuid": "a6c2878c-5edb-4561-9a14-1bf8e24a6e0b", 00:12:07.110 "assigned_rate_limits": { 00:12:07.110 "rw_ios_per_sec": 0, 00:12:07.110 "rw_mbytes_per_sec": 0, 00:12:07.110 "r_mbytes_per_sec": 0, 00:12:07.110 "w_mbytes_per_sec": 0 00:12:07.110 }, 00:12:07.110 "claimed": false, 00:12:07.110 "zoned": false, 00:12:07.110 "supported_io_types": { 00:12:07.110 "read": true, 00:12:07.110 "write": true, 00:12:07.110 "unmap": true, 00:12:07.110 "flush": true, 00:12:07.110 "reset": true, 00:12:07.110 "nvme_admin": false, 00:12:07.110 "nvme_io": false, 00:12:07.110 "nvme_io_md": false, 00:12:07.110 "write_zeroes": true, 00:12:07.110 "zcopy": true, 00:12:07.110 "get_zone_info": false, 00:12:07.110 "zone_management": false, 00:12:07.110 "zone_append": false, 00:12:07.110 "compare": false, 00:12:07.110 "compare_and_write": false, 00:12:07.110 "abort": true, 00:12:07.110 "seek_hole": false, 00:12:07.110 "seek_data": false, 00:12:07.110 "copy": true, 00:12:07.110 "nvme_iov_md": false 00:12:07.110 }, 00:12:07.110 "memory_domains": [ 00:12:07.110 { 00:12:07.110 "dma_device_id": "system", 00:12:07.110 "dma_device_type": 1 00:12:07.110 }, 00:12:07.110 { 00:12:07.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.110 "dma_device_type": 2 00:12:07.110 } 00:12:07.110 ], 00:12:07.110 "driver_specific": {} 00:12:07.110 } 00:12:07.110 ] 00:12:07.110 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.110 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:07.110 10:40:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:07.110 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:07.110 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:07.110 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.110 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.110 BaseBdev4 00:12:07.110 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.110 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:07.110 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:07.110 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:07.110 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:07.110 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:07.110 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:07.110 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:07.110 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.110 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.110 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.110 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:07.110 10:40:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.110 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.110 [ 00:12:07.110 { 00:12:07.110 "name": "BaseBdev4", 00:12:07.110 "aliases": [ 00:12:07.110 "9c1ae0a7-6970-41b2-b16e-a5814f9077ab" 00:12:07.110 ], 00:12:07.110 "product_name": "Malloc disk", 00:12:07.110 "block_size": 512, 00:12:07.110 "num_blocks": 65536, 00:12:07.110 "uuid": "9c1ae0a7-6970-41b2-b16e-a5814f9077ab", 00:12:07.110 "assigned_rate_limits": { 00:12:07.110 "rw_ios_per_sec": 0, 00:12:07.110 "rw_mbytes_per_sec": 0, 00:12:07.110 "r_mbytes_per_sec": 0, 00:12:07.110 "w_mbytes_per_sec": 0 00:12:07.110 }, 00:12:07.110 "claimed": false, 00:12:07.110 "zoned": false, 00:12:07.110 "supported_io_types": { 00:12:07.110 "read": true, 00:12:07.110 "write": true, 00:12:07.110 "unmap": true, 00:12:07.110 "flush": true, 00:12:07.110 "reset": true, 00:12:07.110 "nvme_admin": false, 00:12:07.110 "nvme_io": false, 00:12:07.110 "nvme_io_md": false, 00:12:07.110 "write_zeroes": true, 00:12:07.110 "zcopy": true, 00:12:07.110 "get_zone_info": false, 00:12:07.110 "zone_management": false, 00:12:07.110 "zone_append": false, 00:12:07.110 "compare": false, 00:12:07.110 "compare_and_write": false, 00:12:07.110 "abort": true, 00:12:07.110 "seek_hole": false, 00:12:07.110 "seek_data": false, 00:12:07.110 "copy": true, 00:12:07.110 "nvme_iov_md": false 00:12:07.110 }, 00:12:07.110 "memory_domains": [ 00:12:07.110 { 00:12:07.110 "dma_device_id": "system", 00:12:07.110 "dma_device_type": 1 00:12:07.110 }, 00:12:07.110 { 00:12:07.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.110 "dma_device_type": 2 00:12:07.110 } 00:12:07.110 ], 00:12:07.110 "driver_specific": {} 00:12:07.110 } 00:12:07.110 ] 00:12:07.110 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.110 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:12:07.110 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:07.110 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:07.110 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:07.110 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.111 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.111 [2024-11-15 10:40:28.133207] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:07.111 [2024-11-15 10:40:28.133390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:07.111 [2024-11-15 10:40:28.133527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:07.111 [2024-11-15 10:40:28.135973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:07.111 [2024-11-15 10:40:28.136153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:07.111 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.111 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:07.111 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.111 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.111 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.111 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:07.111 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:07.111 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.111 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.111 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.111 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.111 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.111 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.111 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.111 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.111 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.111 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.111 "name": "Existed_Raid", 00:12:07.111 "uuid": "3aeff82c-9d82-4f53-94d1-803f6c6a12fa", 00:12:07.111 "strip_size_kb": 0, 00:12:07.111 "state": "configuring", 00:12:07.111 "raid_level": "raid1", 00:12:07.111 "superblock": true, 00:12:07.111 "num_base_bdevs": 4, 00:12:07.111 "num_base_bdevs_discovered": 3, 00:12:07.111 "num_base_bdevs_operational": 4, 00:12:07.111 "base_bdevs_list": [ 00:12:07.111 { 00:12:07.111 "name": "BaseBdev1", 00:12:07.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.111 "is_configured": false, 00:12:07.111 "data_offset": 0, 00:12:07.111 "data_size": 0 00:12:07.111 }, 00:12:07.111 { 00:12:07.111 "name": "BaseBdev2", 00:12:07.111 "uuid": "b37ad03c-3e06-4d13-9a33-ae450f9bf70b", 
00:12:07.111 "is_configured": true, 00:12:07.111 "data_offset": 2048, 00:12:07.111 "data_size": 63488 00:12:07.111 }, 00:12:07.111 { 00:12:07.111 "name": "BaseBdev3", 00:12:07.111 "uuid": "a6c2878c-5edb-4561-9a14-1bf8e24a6e0b", 00:12:07.111 "is_configured": true, 00:12:07.111 "data_offset": 2048, 00:12:07.111 "data_size": 63488 00:12:07.111 }, 00:12:07.111 { 00:12:07.111 "name": "BaseBdev4", 00:12:07.111 "uuid": "9c1ae0a7-6970-41b2-b16e-a5814f9077ab", 00:12:07.111 "is_configured": true, 00:12:07.111 "data_offset": 2048, 00:12:07.111 "data_size": 63488 00:12:07.111 } 00:12:07.111 ] 00:12:07.111 }' 00:12:07.111 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.111 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.678 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:07.678 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.678 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.678 [2024-11-15 10:40:28.665364] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:07.678 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.678 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:07.678 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.678 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.678 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.678 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:07.678 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:07.678 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.678 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.678 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.678 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.678 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.678 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.678 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.678 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.678 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.678 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.678 "name": "Existed_Raid", 00:12:07.678 "uuid": "3aeff82c-9d82-4f53-94d1-803f6c6a12fa", 00:12:07.678 "strip_size_kb": 0, 00:12:07.678 "state": "configuring", 00:12:07.678 "raid_level": "raid1", 00:12:07.678 "superblock": true, 00:12:07.678 "num_base_bdevs": 4, 00:12:07.678 "num_base_bdevs_discovered": 2, 00:12:07.678 "num_base_bdevs_operational": 4, 00:12:07.678 "base_bdevs_list": [ 00:12:07.678 { 00:12:07.678 "name": "BaseBdev1", 00:12:07.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.679 "is_configured": false, 00:12:07.679 "data_offset": 0, 00:12:07.679 "data_size": 0 00:12:07.679 }, 00:12:07.679 { 00:12:07.679 "name": null, 00:12:07.679 "uuid": "b37ad03c-3e06-4d13-9a33-ae450f9bf70b", 00:12:07.679 
"is_configured": false, 00:12:07.679 "data_offset": 0, 00:12:07.679 "data_size": 63488 00:12:07.679 }, 00:12:07.679 { 00:12:07.679 "name": "BaseBdev3", 00:12:07.679 "uuid": "a6c2878c-5edb-4561-9a14-1bf8e24a6e0b", 00:12:07.679 "is_configured": true, 00:12:07.679 "data_offset": 2048, 00:12:07.679 "data_size": 63488 00:12:07.679 }, 00:12:07.679 { 00:12:07.679 "name": "BaseBdev4", 00:12:07.679 "uuid": "9c1ae0a7-6970-41b2-b16e-a5814f9077ab", 00:12:07.679 "is_configured": true, 00:12:07.679 "data_offset": 2048, 00:12:07.679 "data_size": 63488 00:12:07.679 } 00:12:07.679 ] 00:12:07.679 }' 00:12:07.679 10:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.679 10:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.246 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.246 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:08.246 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.246 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.246 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.246 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:08.246 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:08.246 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.246 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.246 [2024-11-15 10:40:29.275197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:08.246 BaseBdev1 
00:12:08.246 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.246 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:08.246 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:08.246 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:08.246 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:08.246 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:08.246 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:08.246 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:08.246 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.246 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.246 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.246 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:08.246 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.246 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.246 [ 00:12:08.246 { 00:12:08.246 "name": "BaseBdev1", 00:12:08.246 "aliases": [ 00:12:08.246 "1a63b040-67b7-4587-9f9e-31de2dd906df" 00:12:08.246 ], 00:12:08.246 "product_name": "Malloc disk", 00:12:08.246 "block_size": 512, 00:12:08.246 "num_blocks": 65536, 00:12:08.246 "uuid": "1a63b040-67b7-4587-9f9e-31de2dd906df", 00:12:08.246 "assigned_rate_limits": { 00:12:08.246 
"rw_ios_per_sec": 0, 00:12:08.246 "rw_mbytes_per_sec": 0, 00:12:08.246 "r_mbytes_per_sec": 0, 00:12:08.246 "w_mbytes_per_sec": 0 00:12:08.246 }, 00:12:08.246 "claimed": true, 00:12:08.246 "claim_type": "exclusive_write", 00:12:08.246 "zoned": false, 00:12:08.246 "supported_io_types": { 00:12:08.246 "read": true, 00:12:08.246 "write": true, 00:12:08.246 "unmap": true, 00:12:08.246 "flush": true, 00:12:08.246 "reset": true, 00:12:08.246 "nvme_admin": false, 00:12:08.246 "nvme_io": false, 00:12:08.246 "nvme_io_md": false, 00:12:08.246 "write_zeroes": true, 00:12:08.246 "zcopy": true, 00:12:08.246 "get_zone_info": false, 00:12:08.246 "zone_management": false, 00:12:08.246 "zone_append": false, 00:12:08.246 "compare": false, 00:12:08.246 "compare_and_write": false, 00:12:08.246 "abort": true, 00:12:08.246 "seek_hole": false, 00:12:08.246 "seek_data": false, 00:12:08.246 "copy": true, 00:12:08.246 "nvme_iov_md": false 00:12:08.246 }, 00:12:08.246 "memory_domains": [ 00:12:08.246 { 00:12:08.246 "dma_device_id": "system", 00:12:08.246 "dma_device_type": 1 00:12:08.246 }, 00:12:08.246 { 00:12:08.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.246 "dma_device_type": 2 00:12:08.246 } 00:12:08.246 ], 00:12:08.246 "driver_specific": {} 00:12:08.246 } 00:12:08.246 ] 00:12:08.246 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.246 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:08.246 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:08.246 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.246 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.246 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:08.246 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.246 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.246 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.246 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.246 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.246 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.246 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.246 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.246 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.246 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.246 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.246 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.246 "name": "Existed_Raid", 00:12:08.246 "uuid": "3aeff82c-9d82-4f53-94d1-803f6c6a12fa", 00:12:08.246 "strip_size_kb": 0, 00:12:08.246 "state": "configuring", 00:12:08.246 "raid_level": "raid1", 00:12:08.246 "superblock": true, 00:12:08.246 "num_base_bdevs": 4, 00:12:08.246 "num_base_bdevs_discovered": 3, 00:12:08.246 "num_base_bdevs_operational": 4, 00:12:08.246 "base_bdevs_list": [ 00:12:08.246 { 00:12:08.246 "name": "BaseBdev1", 00:12:08.246 "uuid": "1a63b040-67b7-4587-9f9e-31de2dd906df", 00:12:08.246 "is_configured": true, 00:12:08.246 "data_offset": 2048, 00:12:08.246 "data_size": 63488 
00:12:08.246 }, 00:12:08.246 { 00:12:08.246 "name": null, 00:12:08.246 "uuid": "b37ad03c-3e06-4d13-9a33-ae450f9bf70b", 00:12:08.246 "is_configured": false, 00:12:08.246 "data_offset": 0, 00:12:08.246 "data_size": 63488 00:12:08.246 }, 00:12:08.246 { 00:12:08.246 "name": "BaseBdev3", 00:12:08.246 "uuid": "a6c2878c-5edb-4561-9a14-1bf8e24a6e0b", 00:12:08.246 "is_configured": true, 00:12:08.246 "data_offset": 2048, 00:12:08.246 "data_size": 63488 00:12:08.246 }, 00:12:08.246 { 00:12:08.246 "name": "BaseBdev4", 00:12:08.246 "uuid": "9c1ae0a7-6970-41b2-b16e-a5814f9077ab", 00:12:08.246 "is_configured": true, 00:12:08.246 "data_offset": 2048, 00:12:08.246 "data_size": 63488 00:12:08.247 } 00:12:08.247 ] 00:12:08.247 }' 00:12:08.247 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.247 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.814 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.814 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.814 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.814 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:08.814 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.814 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:08.814 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:08.814 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.814 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.814 
[2024-11-15 10:40:29.871428] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:08.814 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.814 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:08.814 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.814 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.814 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.814 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.814 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.814 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.814 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.814 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.814 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.814 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.814 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.814 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.814 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.814 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.814 10:40:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.814 "name": "Existed_Raid", 00:12:08.814 "uuid": "3aeff82c-9d82-4f53-94d1-803f6c6a12fa", 00:12:08.814 "strip_size_kb": 0, 00:12:08.814 "state": "configuring", 00:12:08.814 "raid_level": "raid1", 00:12:08.814 "superblock": true, 00:12:08.814 "num_base_bdevs": 4, 00:12:08.814 "num_base_bdevs_discovered": 2, 00:12:08.814 "num_base_bdevs_operational": 4, 00:12:08.814 "base_bdevs_list": [ 00:12:08.814 { 00:12:08.814 "name": "BaseBdev1", 00:12:08.814 "uuid": "1a63b040-67b7-4587-9f9e-31de2dd906df", 00:12:08.814 "is_configured": true, 00:12:08.814 "data_offset": 2048, 00:12:08.814 "data_size": 63488 00:12:08.814 }, 00:12:08.814 { 00:12:08.814 "name": null, 00:12:08.814 "uuid": "b37ad03c-3e06-4d13-9a33-ae450f9bf70b", 00:12:08.814 "is_configured": false, 00:12:08.814 "data_offset": 0, 00:12:08.814 "data_size": 63488 00:12:08.814 }, 00:12:08.814 { 00:12:08.814 "name": null, 00:12:08.814 "uuid": "a6c2878c-5edb-4561-9a14-1bf8e24a6e0b", 00:12:08.814 "is_configured": false, 00:12:08.814 "data_offset": 0, 00:12:08.814 "data_size": 63488 00:12:08.814 }, 00:12:08.814 { 00:12:08.814 "name": "BaseBdev4", 00:12:08.814 "uuid": "9c1ae0a7-6970-41b2-b16e-a5814f9077ab", 00:12:08.814 "is_configured": true, 00:12:08.814 "data_offset": 2048, 00:12:08.814 "data_size": 63488 00:12:08.814 } 00:12:08.814 ] 00:12:08.814 }' 00:12:08.814 10:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.814 10:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.382 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:09.382 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.382 10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.382 
10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.382 10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.382 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:09.382 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:09.382 10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.382 10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.382 [2024-11-15 10:40:30.419564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:09.382 10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.382 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:09.382 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.382 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.382 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.382 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.382 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:09.382 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.382 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.382 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:09.382 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.382 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.382 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.382 10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.382 10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.382 10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.382 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.382 "name": "Existed_Raid", 00:12:09.382 "uuid": "3aeff82c-9d82-4f53-94d1-803f6c6a12fa", 00:12:09.382 "strip_size_kb": 0, 00:12:09.382 "state": "configuring", 00:12:09.382 "raid_level": "raid1", 00:12:09.382 "superblock": true, 00:12:09.382 "num_base_bdevs": 4, 00:12:09.382 "num_base_bdevs_discovered": 3, 00:12:09.382 "num_base_bdevs_operational": 4, 00:12:09.382 "base_bdevs_list": [ 00:12:09.382 { 00:12:09.382 "name": "BaseBdev1", 00:12:09.382 "uuid": "1a63b040-67b7-4587-9f9e-31de2dd906df", 00:12:09.382 "is_configured": true, 00:12:09.382 "data_offset": 2048, 00:12:09.382 "data_size": 63488 00:12:09.382 }, 00:12:09.382 { 00:12:09.382 "name": null, 00:12:09.382 "uuid": "b37ad03c-3e06-4d13-9a33-ae450f9bf70b", 00:12:09.382 "is_configured": false, 00:12:09.382 "data_offset": 0, 00:12:09.382 "data_size": 63488 00:12:09.382 }, 00:12:09.382 { 00:12:09.382 "name": "BaseBdev3", 00:12:09.382 "uuid": "a6c2878c-5edb-4561-9a14-1bf8e24a6e0b", 00:12:09.382 "is_configured": true, 00:12:09.382 "data_offset": 2048, 00:12:09.382 "data_size": 63488 00:12:09.382 }, 00:12:09.382 { 00:12:09.382 "name": "BaseBdev4", 00:12:09.382 "uuid": 
"9c1ae0a7-6970-41b2-b16e-a5814f9077ab", 00:12:09.382 "is_configured": true, 00:12:09.382 "data_offset": 2048, 00:12:09.382 "data_size": 63488 00:12:09.382 } 00:12:09.382 ] 00:12:09.382 }' 00:12:09.382 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.382 10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.950 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:09.950 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.950 10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.950 10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.950 10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.950 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:09.950 10:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:09.950 10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.950 10:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.950 [2024-11-15 10:40:30.943750] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:09.950 10:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.950 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:09.950 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.950 10:40:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.950 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.950 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.950 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:09.950 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.950 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.950 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.950 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.950 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.950 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.950 10:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.950 10:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.950 10:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.950 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.950 "name": "Existed_Raid", 00:12:09.950 "uuid": "3aeff82c-9d82-4f53-94d1-803f6c6a12fa", 00:12:09.950 "strip_size_kb": 0, 00:12:09.950 "state": "configuring", 00:12:09.950 "raid_level": "raid1", 00:12:09.950 "superblock": true, 00:12:09.950 "num_base_bdevs": 4, 00:12:09.950 "num_base_bdevs_discovered": 2, 00:12:09.950 "num_base_bdevs_operational": 4, 00:12:09.950 "base_bdevs_list": [ 00:12:09.950 { 00:12:09.950 "name": null, 00:12:09.950 
"uuid": "1a63b040-67b7-4587-9f9e-31de2dd906df", 00:12:09.950 "is_configured": false, 00:12:09.950 "data_offset": 0, 00:12:09.950 "data_size": 63488 00:12:09.950 }, 00:12:09.950 { 00:12:09.950 "name": null, 00:12:09.950 "uuid": "b37ad03c-3e06-4d13-9a33-ae450f9bf70b", 00:12:09.950 "is_configured": false, 00:12:09.950 "data_offset": 0, 00:12:09.950 "data_size": 63488 00:12:09.950 }, 00:12:09.950 { 00:12:09.950 "name": "BaseBdev3", 00:12:09.950 "uuid": "a6c2878c-5edb-4561-9a14-1bf8e24a6e0b", 00:12:09.950 "is_configured": true, 00:12:09.950 "data_offset": 2048, 00:12:09.950 "data_size": 63488 00:12:09.950 }, 00:12:09.950 { 00:12:09.950 "name": "BaseBdev4", 00:12:09.950 "uuid": "9c1ae0a7-6970-41b2-b16e-a5814f9077ab", 00:12:09.950 "is_configured": true, 00:12:09.950 "data_offset": 2048, 00:12:09.950 "data_size": 63488 00:12:09.950 } 00:12:09.950 ] 00:12:09.950 }' 00:12:09.950 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.950 10:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.517 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.517 10:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.517 10:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.517 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:10.517 10:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.517 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:10.517 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:10.517 10:40:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.517 10:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.517 [2024-11-15 10:40:31.558845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:10.517 10:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.517 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:10.517 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.517 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:10.517 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.517 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.517 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:10.517 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.517 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.517 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.517 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.517 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.517 10:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.517 10:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.517 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.517 10:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.517 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.517 "name": "Existed_Raid", 00:12:10.517 "uuid": "3aeff82c-9d82-4f53-94d1-803f6c6a12fa", 00:12:10.517 "strip_size_kb": 0, 00:12:10.517 "state": "configuring", 00:12:10.517 "raid_level": "raid1", 00:12:10.517 "superblock": true, 00:12:10.517 "num_base_bdevs": 4, 00:12:10.517 "num_base_bdevs_discovered": 3, 00:12:10.517 "num_base_bdevs_operational": 4, 00:12:10.517 "base_bdevs_list": [ 00:12:10.517 { 00:12:10.517 "name": null, 00:12:10.517 "uuid": "1a63b040-67b7-4587-9f9e-31de2dd906df", 00:12:10.517 "is_configured": false, 00:12:10.517 "data_offset": 0, 00:12:10.517 "data_size": 63488 00:12:10.517 }, 00:12:10.517 { 00:12:10.517 "name": "BaseBdev2", 00:12:10.517 "uuid": "b37ad03c-3e06-4d13-9a33-ae450f9bf70b", 00:12:10.517 "is_configured": true, 00:12:10.517 "data_offset": 2048, 00:12:10.517 "data_size": 63488 00:12:10.517 }, 00:12:10.517 { 00:12:10.517 "name": "BaseBdev3", 00:12:10.517 "uuid": "a6c2878c-5edb-4561-9a14-1bf8e24a6e0b", 00:12:10.517 "is_configured": true, 00:12:10.517 "data_offset": 2048, 00:12:10.517 "data_size": 63488 00:12:10.517 }, 00:12:10.517 { 00:12:10.517 "name": "BaseBdev4", 00:12:10.517 "uuid": "9c1ae0a7-6970-41b2-b16e-a5814f9077ab", 00:12:10.517 "is_configured": true, 00:12:10.517 "data_offset": 2048, 00:12:10.517 "data_size": 63488 00:12:10.517 } 00:12:10.517 ] 00:12:10.517 }' 00:12:10.517 10:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.517 10:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.083 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.083 10:40:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.083 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:11.083 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.083 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.083 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:11.083 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.083 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.083 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.083 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:11.083 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.083 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1a63b040-67b7-4587-9f9e-31de2dd906df 00:12:11.083 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.083 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.083 [2024-11-15 10:40:32.188942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:11.083 [2024-11-15 10:40:32.189219] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:11.083 [2024-11-15 10:40:32.189242] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:11.083 [2024-11-15 10:40:32.189580] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:11.083 
NewBaseBdev 00:12:11.083 [2024-11-15 10:40:32.189801] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:11.083 [2024-11-15 10:40:32.189824] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:11.083 [2024-11-15 10:40:32.189989] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:11.083 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.083 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:11.083 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:11.083 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:11.083 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:11.083 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:11.083 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:11.083 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:11.083 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.084 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.084 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.084 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:11.084 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.084 10:40:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:11.084 [ 00:12:11.084 { 00:12:11.084 "name": "NewBaseBdev", 00:12:11.084 "aliases": [ 00:12:11.084 "1a63b040-67b7-4587-9f9e-31de2dd906df" 00:12:11.084 ], 00:12:11.084 "product_name": "Malloc disk", 00:12:11.084 "block_size": 512, 00:12:11.084 "num_blocks": 65536, 00:12:11.084 "uuid": "1a63b040-67b7-4587-9f9e-31de2dd906df", 00:12:11.084 "assigned_rate_limits": { 00:12:11.084 "rw_ios_per_sec": 0, 00:12:11.084 "rw_mbytes_per_sec": 0, 00:12:11.084 "r_mbytes_per_sec": 0, 00:12:11.084 "w_mbytes_per_sec": 0 00:12:11.084 }, 00:12:11.084 "claimed": true, 00:12:11.084 "claim_type": "exclusive_write", 00:12:11.084 "zoned": false, 00:12:11.084 "supported_io_types": { 00:12:11.084 "read": true, 00:12:11.084 "write": true, 00:12:11.084 "unmap": true, 00:12:11.084 "flush": true, 00:12:11.084 "reset": true, 00:12:11.084 "nvme_admin": false, 00:12:11.084 "nvme_io": false, 00:12:11.084 "nvme_io_md": false, 00:12:11.084 "write_zeroes": true, 00:12:11.084 "zcopy": true, 00:12:11.084 "get_zone_info": false, 00:12:11.084 "zone_management": false, 00:12:11.084 "zone_append": false, 00:12:11.084 "compare": false, 00:12:11.084 "compare_and_write": false, 00:12:11.084 "abort": true, 00:12:11.084 "seek_hole": false, 00:12:11.084 "seek_data": false, 00:12:11.084 "copy": true, 00:12:11.084 "nvme_iov_md": false 00:12:11.084 }, 00:12:11.084 "memory_domains": [ 00:12:11.084 { 00:12:11.084 "dma_device_id": "system", 00:12:11.084 "dma_device_type": 1 00:12:11.084 }, 00:12:11.084 { 00:12:11.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.084 "dma_device_type": 2 00:12:11.084 } 00:12:11.084 ], 00:12:11.084 "driver_specific": {} 00:12:11.084 } 00:12:11.084 ] 00:12:11.084 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.084 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:11.084 10:40:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:11.084 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.084 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:11.084 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.084 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.084 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:11.084 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.084 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.084 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.084 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.084 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.084 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.084 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.084 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.342 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.342 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.342 "name": "Existed_Raid", 00:12:11.342 "uuid": "3aeff82c-9d82-4f53-94d1-803f6c6a12fa", 00:12:11.342 "strip_size_kb": 0, 00:12:11.342 "state": "online", 00:12:11.342 "raid_level": 
"raid1", 00:12:11.342 "superblock": true, 00:12:11.342 "num_base_bdevs": 4, 00:12:11.342 "num_base_bdevs_discovered": 4, 00:12:11.342 "num_base_bdevs_operational": 4, 00:12:11.342 "base_bdevs_list": [ 00:12:11.342 { 00:12:11.342 "name": "NewBaseBdev", 00:12:11.342 "uuid": "1a63b040-67b7-4587-9f9e-31de2dd906df", 00:12:11.342 "is_configured": true, 00:12:11.342 "data_offset": 2048, 00:12:11.342 "data_size": 63488 00:12:11.342 }, 00:12:11.342 { 00:12:11.342 "name": "BaseBdev2", 00:12:11.342 "uuid": "b37ad03c-3e06-4d13-9a33-ae450f9bf70b", 00:12:11.342 "is_configured": true, 00:12:11.342 "data_offset": 2048, 00:12:11.342 "data_size": 63488 00:12:11.342 }, 00:12:11.342 { 00:12:11.342 "name": "BaseBdev3", 00:12:11.342 "uuid": "a6c2878c-5edb-4561-9a14-1bf8e24a6e0b", 00:12:11.342 "is_configured": true, 00:12:11.342 "data_offset": 2048, 00:12:11.342 "data_size": 63488 00:12:11.342 }, 00:12:11.342 { 00:12:11.342 "name": "BaseBdev4", 00:12:11.342 "uuid": "9c1ae0a7-6970-41b2-b16e-a5814f9077ab", 00:12:11.342 "is_configured": true, 00:12:11.342 "data_offset": 2048, 00:12:11.342 "data_size": 63488 00:12:11.342 } 00:12:11.342 ] 00:12:11.342 }' 00:12:11.342 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.342 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.632 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:11.632 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:11.632 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:11.632 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:11.632 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:11.632 10:40:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:11.632 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:11.632 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:11.632 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.632 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.632 [2024-11-15 10:40:32.757615] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:11.912 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.912 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:11.912 "name": "Existed_Raid", 00:12:11.912 "aliases": [ 00:12:11.912 "3aeff82c-9d82-4f53-94d1-803f6c6a12fa" 00:12:11.912 ], 00:12:11.912 "product_name": "Raid Volume", 00:12:11.912 "block_size": 512, 00:12:11.912 "num_blocks": 63488, 00:12:11.912 "uuid": "3aeff82c-9d82-4f53-94d1-803f6c6a12fa", 00:12:11.912 "assigned_rate_limits": { 00:12:11.912 "rw_ios_per_sec": 0, 00:12:11.912 "rw_mbytes_per_sec": 0, 00:12:11.912 "r_mbytes_per_sec": 0, 00:12:11.912 "w_mbytes_per_sec": 0 00:12:11.912 }, 00:12:11.912 "claimed": false, 00:12:11.912 "zoned": false, 00:12:11.912 "supported_io_types": { 00:12:11.912 "read": true, 00:12:11.912 "write": true, 00:12:11.912 "unmap": false, 00:12:11.912 "flush": false, 00:12:11.912 "reset": true, 00:12:11.912 "nvme_admin": false, 00:12:11.912 "nvme_io": false, 00:12:11.912 "nvme_io_md": false, 00:12:11.912 "write_zeroes": true, 00:12:11.912 "zcopy": false, 00:12:11.912 "get_zone_info": false, 00:12:11.912 "zone_management": false, 00:12:11.912 "zone_append": false, 00:12:11.912 "compare": false, 00:12:11.912 "compare_and_write": false, 00:12:11.912 "abort": false, 00:12:11.912 "seek_hole": false, 
00:12:11.912 "seek_data": false, 00:12:11.912 "copy": false, 00:12:11.912 "nvme_iov_md": false 00:12:11.912 }, 00:12:11.912 "memory_domains": [ 00:12:11.912 { 00:12:11.912 "dma_device_id": "system", 00:12:11.912 "dma_device_type": 1 00:12:11.912 }, 00:12:11.912 { 00:12:11.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.912 "dma_device_type": 2 00:12:11.912 }, 00:12:11.912 { 00:12:11.912 "dma_device_id": "system", 00:12:11.912 "dma_device_type": 1 00:12:11.912 }, 00:12:11.912 { 00:12:11.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.912 "dma_device_type": 2 00:12:11.912 }, 00:12:11.912 { 00:12:11.912 "dma_device_id": "system", 00:12:11.912 "dma_device_type": 1 00:12:11.912 }, 00:12:11.912 { 00:12:11.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.912 "dma_device_type": 2 00:12:11.912 }, 00:12:11.912 { 00:12:11.912 "dma_device_id": "system", 00:12:11.912 "dma_device_type": 1 00:12:11.912 }, 00:12:11.912 { 00:12:11.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.912 "dma_device_type": 2 00:12:11.912 } 00:12:11.912 ], 00:12:11.912 "driver_specific": { 00:12:11.912 "raid": { 00:12:11.912 "uuid": "3aeff82c-9d82-4f53-94d1-803f6c6a12fa", 00:12:11.912 "strip_size_kb": 0, 00:12:11.912 "state": "online", 00:12:11.912 "raid_level": "raid1", 00:12:11.912 "superblock": true, 00:12:11.912 "num_base_bdevs": 4, 00:12:11.912 "num_base_bdevs_discovered": 4, 00:12:11.912 "num_base_bdevs_operational": 4, 00:12:11.912 "base_bdevs_list": [ 00:12:11.912 { 00:12:11.912 "name": "NewBaseBdev", 00:12:11.912 "uuid": "1a63b040-67b7-4587-9f9e-31de2dd906df", 00:12:11.912 "is_configured": true, 00:12:11.912 "data_offset": 2048, 00:12:11.912 "data_size": 63488 00:12:11.912 }, 00:12:11.912 { 00:12:11.912 "name": "BaseBdev2", 00:12:11.912 "uuid": "b37ad03c-3e06-4d13-9a33-ae450f9bf70b", 00:12:11.912 "is_configured": true, 00:12:11.912 "data_offset": 2048, 00:12:11.912 "data_size": 63488 00:12:11.912 }, 00:12:11.912 { 00:12:11.912 "name": "BaseBdev3", 00:12:11.912 "uuid": 
"a6c2878c-5edb-4561-9a14-1bf8e24a6e0b", 00:12:11.912 "is_configured": true, 00:12:11.912 "data_offset": 2048, 00:12:11.912 "data_size": 63488 00:12:11.912 }, 00:12:11.912 { 00:12:11.912 "name": "BaseBdev4", 00:12:11.912 "uuid": "9c1ae0a7-6970-41b2-b16e-a5814f9077ab", 00:12:11.912 "is_configured": true, 00:12:11.912 "data_offset": 2048, 00:12:11.912 "data_size": 63488 00:12:11.912 } 00:12:11.912 ] 00:12:11.912 } 00:12:11.912 } 00:12:11.912 }' 00:12:11.912 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:11.912 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:11.912 BaseBdev2 00:12:11.912 BaseBdev3 00:12:11.912 BaseBdev4' 00:12:11.912 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:11.912 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:11.912 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:11.912 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:11.912 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:11.912 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.912 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.912 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.912 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:11.912 10:40:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:11.912 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:11.912 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:11.912 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.912 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.912 10:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:11.912 10:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.912 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:11.912 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:11.912 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:11.912 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:11.913 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:11.913 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.913 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.913 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.913 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:11.913 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:11.913 
10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:11.913 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:11.913 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.913 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.913 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.171 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.171 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.171 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.171 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:12.171 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.171 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.171 [2024-11-15 10:40:33.133252] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:12.171 [2024-11-15 10:40:33.133286] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:12.171 [2024-11-15 10:40:33.133388] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:12.171 [2024-11-15 10:40:33.133767] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:12.171 [2024-11-15 10:40:33.133791] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:12.171 10:40:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.171 10:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73976 00:12:12.171 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73976 ']' 00:12:12.171 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73976 00:12:12.171 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:12.171 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:12.171 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73976 00:12:12.171 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:12.171 killing process with pid 73976 00:12:12.171 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:12.171 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73976' 00:12:12.171 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73976 00:12:12.171 [2024-11-15 10:40:33.172410] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:12.171 10:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73976 00:12:12.429 [2024-11-15 10:40:33.528588] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:13.806 10:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:13.806 00:12:13.806 real 0m12.664s 00:12:13.806 user 0m21.050s 00:12:13.806 sys 0m1.735s 00:12:13.806 ************************************ 00:12:13.806 END TEST raid_state_function_test_sb 00:12:13.806 ************************************ 00:12:13.806 10:40:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:13.806 10:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.806 10:40:34 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:12:13.806 10:40:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:13.806 10:40:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:13.806 10:40:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:13.806 ************************************ 00:12:13.806 START TEST raid_superblock_test 00:12:13.806 ************************************ 00:12:13.806 10:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:12:13.806 10:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:12:13.806 10:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:13.806 10:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:13.806 10:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:13.806 10:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:13.806 10:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:13.806 10:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:13.806 10:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:13.806 10:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:13.806 10:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:13.806 10:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:12:13.806 10:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:13.806 10:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:13.806 10:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:12:13.806 10:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:12:13.806 10:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74655 00:12:13.806 10:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:13.806 10:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74655 00:12:13.806 10:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74655 ']' 00:12:13.806 10:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:13.806 10:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:13.806 10:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:13.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:13.806 10:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:13.806 10:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.806 [2024-11-15 10:40:34.706164] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:12:13.806 [2024-11-15 10:40:34.706590] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74655 ] 00:12:13.806 [2024-11-15 10:40:34.890540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.065 [2024-11-15 10:40:35.020998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.324 [2024-11-15 10:40:35.223703] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:14.324 [2024-11-15 10:40:35.223762] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:14.583 10:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:14.583 10:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:14.583 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:14.583 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:14.583 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:14.583 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:14.583 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:14.583 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:14.583 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:14.583 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:14.583 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:14.583 
10:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.583 10:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.583 malloc1 00:12:14.583 10:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.583 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:14.583 10:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.583 10:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.583 [2024-11-15 10:40:35.718135] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:14.583 [2024-11-15 10:40:35.718213] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:14.583 [2024-11-15 10:40:35.718249] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:14.583 [2024-11-15 10:40:35.718264] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:14.583 [2024-11-15 10:40:35.721003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:14.583 [2024-11-15 10:40:35.721182] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:14.583 pt1 00:12:14.583 10:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.583 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:14.583 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:14.583 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:14.583 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:14.583 10:40:35 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:14.583 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:14.583 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:14.583 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:14.583 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:14.583 10:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.583 10:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.843 malloc2 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.843 [2024-11-15 10:40:35.773444] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:14.843 [2024-11-15 10:40:35.773520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:14.843 [2024-11-15 10:40:35.773554] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:14.843 [2024-11-15 10:40:35.773568] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:14.843 [2024-11-15 10:40:35.776277] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:14.843 [2024-11-15 10:40:35.776321] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:14.843 
pt2 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.843 malloc3 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.843 [2024-11-15 10:40:35.840137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:14.843 [2024-11-15 10:40:35.840202] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:14.843 [2024-11-15 10:40:35.840236] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:14.843 [2024-11-15 10:40:35.840252] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:14.843 [2024-11-15 10:40:35.843031] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:14.843 [2024-11-15 10:40:35.843077] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:14.843 pt3 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.843 malloc4 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.843 [2024-11-15 10:40:35.895685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:14.843 [2024-11-15 10:40:35.895882] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:14.843 [2024-11-15 10:40:35.895955] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:14.843 [2024-11-15 10:40:35.896062] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:14.843 [2024-11-15 10:40:35.898866] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:14.843 [2024-11-15 10:40:35.899017] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:14.843 pt4 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.843 [2024-11-15 10:40:35.907749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:14.843 [2024-11-15 10:40:35.910243] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:14.843 [2024-11-15 10:40:35.910456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:14.843 [2024-11-15 10:40:35.910682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:14.843 [2024-11-15 10:40:35.910987] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:14.843 [2024-11-15 10:40:35.911107] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:14.843 [2024-11-15 10:40:35.911519] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:14.843 [2024-11-15 10:40:35.911870] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:14.843 [2024-11-15 10:40:35.912005] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:14.843 [2024-11-15 10:40:35.912358] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.843 
10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.843 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.843 "name": "raid_bdev1", 00:12:14.843 "uuid": "a44caee0-39d9-40b4-8576-7c036db30b91", 00:12:14.843 "strip_size_kb": 0, 00:12:14.843 "state": "online", 00:12:14.843 "raid_level": "raid1", 00:12:14.843 "superblock": true, 00:12:14.843 "num_base_bdevs": 4, 00:12:14.843 "num_base_bdevs_discovered": 4, 00:12:14.843 "num_base_bdevs_operational": 4, 00:12:14.843 "base_bdevs_list": [ 00:12:14.843 { 00:12:14.843 "name": "pt1", 00:12:14.843 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:14.843 "is_configured": true, 00:12:14.843 "data_offset": 2048, 00:12:14.843 "data_size": 63488 00:12:14.843 }, 00:12:14.843 { 00:12:14.843 "name": "pt2", 00:12:14.843 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:14.843 "is_configured": true, 00:12:14.843 "data_offset": 2048, 00:12:14.843 "data_size": 63488 00:12:14.843 }, 00:12:14.843 { 00:12:14.843 "name": "pt3", 00:12:14.843 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:14.843 "is_configured": true, 00:12:14.843 "data_offset": 2048, 00:12:14.843 "data_size": 63488 
00:12:14.843 }, 00:12:14.843 { 00:12:14.843 "name": "pt4", 00:12:14.844 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:14.844 "is_configured": true, 00:12:14.844 "data_offset": 2048, 00:12:14.844 "data_size": 63488 00:12:14.844 } 00:12:14.844 ] 00:12:14.844 }' 00:12:14.844 10:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.844 10:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.410 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:15.410 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:15.410 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:15.410 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:15.410 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:15.410 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:15.410 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:15.410 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:15.410 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.410 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.410 [2024-11-15 10:40:36.400932] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:15.410 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.410 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:15.410 "name": "raid_bdev1", 00:12:15.410 "aliases": [ 00:12:15.410 "a44caee0-39d9-40b4-8576-7c036db30b91" 00:12:15.410 ], 
00:12:15.410 "product_name": "Raid Volume", 00:12:15.410 "block_size": 512, 00:12:15.410 "num_blocks": 63488, 00:12:15.410 "uuid": "a44caee0-39d9-40b4-8576-7c036db30b91", 00:12:15.410 "assigned_rate_limits": { 00:12:15.410 "rw_ios_per_sec": 0, 00:12:15.410 "rw_mbytes_per_sec": 0, 00:12:15.410 "r_mbytes_per_sec": 0, 00:12:15.410 "w_mbytes_per_sec": 0 00:12:15.410 }, 00:12:15.410 "claimed": false, 00:12:15.410 "zoned": false, 00:12:15.410 "supported_io_types": { 00:12:15.410 "read": true, 00:12:15.410 "write": true, 00:12:15.410 "unmap": false, 00:12:15.410 "flush": false, 00:12:15.410 "reset": true, 00:12:15.410 "nvme_admin": false, 00:12:15.410 "nvme_io": false, 00:12:15.410 "nvme_io_md": false, 00:12:15.410 "write_zeroes": true, 00:12:15.410 "zcopy": false, 00:12:15.410 "get_zone_info": false, 00:12:15.410 "zone_management": false, 00:12:15.410 "zone_append": false, 00:12:15.410 "compare": false, 00:12:15.410 "compare_and_write": false, 00:12:15.410 "abort": false, 00:12:15.410 "seek_hole": false, 00:12:15.410 "seek_data": false, 00:12:15.410 "copy": false, 00:12:15.410 "nvme_iov_md": false 00:12:15.410 }, 00:12:15.410 "memory_domains": [ 00:12:15.410 { 00:12:15.410 "dma_device_id": "system", 00:12:15.410 "dma_device_type": 1 00:12:15.410 }, 00:12:15.410 { 00:12:15.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.410 "dma_device_type": 2 00:12:15.410 }, 00:12:15.410 { 00:12:15.410 "dma_device_id": "system", 00:12:15.410 "dma_device_type": 1 00:12:15.410 }, 00:12:15.410 { 00:12:15.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.410 "dma_device_type": 2 00:12:15.410 }, 00:12:15.410 { 00:12:15.410 "dma_device_id": "system", 00:12:15.410 "dma_device_type": 1 00:12:15.410 }, 00:12:15.410 { 00:12:15.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.410 "dma_device_type": 2 00:12:15.410 }, 00:12:15.410 { 00:12:15.410 "dma_device_id": "system", 00:12:15.410 "dma_device_type": 1 00:12:15.410 }, 00:12:15.410 { 00:12:15.410 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:15.410 "dma_device_type": 2 00:12:15.410 } 00:12:15.410 ], 00:12:15.410 "driver_specific": { 00:12:15.410 "raid": { 00:12:15.410 "uuid": "a44caee0-39d9-40b4-8576-7c036db30b91", 00:12:15.410 "strip_size_kb": 0, 00:12:15.410 "state": "online", 00:12:15.410 "raid_level": "raid1", 00:12:15.410 "superblock": true, 00:12:15.410 "num_base_bdevs": 4, 00:12:15.410 "num_base_bdevs_discovered": 4, 00:12:15.410 "num_base_bdevs_operational": 4, 00:12:15.410 "base_bdevs_list": [ 00:12:15.410 { 00:12:15.411 "name": "pt1", 00:12:15.411 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:15.411 "is_configured": true, 00:12:15.411 "data_offset": 2048, 00:12:15.411 "data_size": 63488 00:12:15.411 }, 00:12:15.411 { 00:12:15.411 "name": "pt2", 00:12:15.411 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:15.411 "is_configured": true, 00:12:15.411 "data_offset": 2048, 00:12:15.411 "data_size": 63488 00:12:15.411 }, 00:12:15.411 { 00:12:15.411 "name": "pt3", 00:12:15.411 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:15.411 "is_configured": true, 00:12:15.411 "data_offset": 2048, 00:12:15.411 "data_size": 63488 00:12:15.411 }, 00:12:15.411 { 00:12:15.411 "name": "pt4", 00:12:15.411 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:15.411 "is_configured": true, 00:12:15.411 "data_offset": 2048, 00:12:15.411 "data_size": 63488 00:12:15.411 } 00:12:15.411 ] 00:12:15.411 } 00:12:15.411 } 00:12:15.411 }' 00:12:15.411 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:15.411 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:15.411 pt2 00:12:15.411 pt3 00:12:15.411 pt4' 00:12:15.411 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:15.411 10:40:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:15.411 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:15.411 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:15.411 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:15.411 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.411 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.411 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.670 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:15.670 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:15.670 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:15.670 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:15.670 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.670 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.670 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:15.670 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.670 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:15.670 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:15.670 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:15.670 10:40:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:15.670 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.670 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.670 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:15.670 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.670 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:15.670 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:15.670 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:15.670 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:15.670 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.670 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:15.670 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.670 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.670 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:15.670 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:15.670 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:15.670 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.670 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
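The loop above (bdev/bdev_raid.sh@187-193) compares a "block_size md_size md_interleave dif_type" string for the raid bdev against the same string for each passthru base bdev pt1..pt4. A minimal Python restatement of that check, with values taken from this log (the helper is illustrative; the real test drives it through `rpc_cmd` and `jq`):

```python
# md_size, md_interleave and dif_type are unset in this run, which is why
# the compared value in the log is "512" followed by three spaces.
def cmp_string(bdev):
    # Equivalent of: jq -r '[.block_size, .md_size, .md_interleave, .dif_type]
    #                       | join(" ")'   (jq joins null fields as empty strings)
    fields = ("block_size", "md_size", "md_interleave", "dif_type")
    return " ".join("" if bdev.get(f) is None else str(bdev[f]) for f in fields)

cmp_raid_bdev = cmp_string({"block_size": 512})   # raid_bdev1
cmp_base_bdev = cmp_string({"block_size": 512})   # pt1..pt4
assert cmp_raid_bdev == cmp_base_bdev == "512   "
```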
00:12:15.670 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:15.670 [2024-11-15 10:40:36.740912] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:15.670 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.670 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a44caee0-39d9-40b4-8576-7c036db30b91 00:12:15.670 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a44caee0-39d9-40b4-8576-7c036db30b91 ']' 00:12:15.670 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:15.670 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.670 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.670 [2024-11-15 10:40:36.792522] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:15.670 [2024-11-15 10:40:36.792553] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:15.670 [2024-11-15 10:40:36.792651] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:15.670 [2024-11-15 10:40:36.792780] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:15.670 [2024-11-15 10:40:36.792804] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:15.670 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.670 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.670 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.670 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:12:15.670 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:15.670 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.929 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:15.929 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:15.929 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:15.929 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:15.929 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.929 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.929 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.929 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:15.929 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:15.929 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.929 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.930 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.930 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:15.930 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:15.930 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.930 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.930 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:12:15.930 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:15.930 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:15.930 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.930 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.930 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.930 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:15.930 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.930 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:15.930 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.930 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.930 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:15.930 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:15.930 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:15.930 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:15.930 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:15.930 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:15.930 10:40:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:15.930 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:15.930 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:15.930 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.930 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.930 [2024-11-15 10:40:36.956571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:15.930 [2024-11-15 10:40:36.959014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:15.930 [2024-11-15 10:40:36.959085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:15.930 [2024-11-15 10:40:36.959143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:15.930 [2024-11-15 10:40:36.959213] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:15.930 [2024-11-15 10:40:36.959284] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:15.930 [2024-11-15 10:40:36.959319] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:15.930 [2024-11-15 10:40:36.959367] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:15.930 [2024-11-15 10:40:36.959388] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:15.930 [2024-11-15 10:40:36.959405] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 
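The `bdev_raid_create` call above runs under the `NOT` wrapper (bdev/bdev_raid.sh@457): the malloc bdevs still carry the superblock of the deleted raid_bdev1, so the RPC is expected to fail with -17 (-EEXIST, "File exists"). A sketch of that expectation, using the error response JSON copied from this log (the check itself is illustrative, not part of the test suite):

```python
import json

# JSON-RPC error response from the failed bdev_raid_create call,
# copied from the log; -17 is -EEXIST, matching "File exists".
response = json.loads("""
{
  "code": -17,
  "message": "Failed to create RAID bdev raid_bdev1: File exists"
}
""")

# The NOT wrapper in autotest_common.sh inverts the exit status: a
# non-zero error code here is the expected, passing outcome of this step.
create_failed = response["code"] != 0
assert create_failed
assert response["code"] == -17
```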
00:12:15.930 request:
00:12:15.930 {
00:12:15.930 "name": "raid_bdev1",
00:12:15.930 "raid_level": "raid1",
00:12:15.930 "base_bdevs": [
00:12:15.930 "malloc1",
00:12:15.930 "malloc2",
00:12:15.930 "malloc3",
00:12:15.930 "malloc4"
00:12:15.930 ],
00:12:15.930 "superblock": false,
00:12:15.930 "method": "bdev_raid_create",
00:12:15.930 "req_id": 1
00:12:15.930 }
00:12:15.930 Got JSON-RPC error response
00:12:15.930 response:
00:12:15.930 {
00:12:15.930 "code": -17,
00:12:15.930 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:12:15.930 }
00:12:15.930 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:12:15.930 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:12:15.930 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:12:15.930 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:12:15.930 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:12:15.930 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:15.930 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:15.930 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:15.930 10:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:12:15.930 10:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:15.930 10:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:12:15.930 10:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:12:15.930 10:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:12:15.930 10:40:37
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.930 10:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.930 [2024-11-15 10:40:37.024553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:15.930 [2024-11-15 10:40:37.024752] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.930 [2024-11-15 10:40:37.024818] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:15.930 [2024-11-15 10:40:37.024952] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.930 [2024-11-15 10:40:37.027834] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.930 [2024-11-15 10:40:37.027992] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:15.930 [2024-11-15 10:40:37.028178] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:15.930 [2024-11-15 10:40:37.028349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:15.930 pt1 00:12:15.930 10:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.930 10:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:15.930 10:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:15.930 10:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.930 10:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.930 10:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:15.930 10:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:15.930 10:40:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.930 10:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.930 10:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.930 10:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.930 10:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.930 10:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.930 10:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.930 10:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.930 10:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.930 10:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.930 "name": "raid_bdev1", 00:12:15.930 "uuid": "a44caee0-39d9-40b4-8576-7c036db30b91", 00:12:15.930 "strip_size_kb": 0, 00:12:15.930 "state": "configuring", 00:12:15.930 "raid_level": "raid1", 00:12:15.930 "superblock": true, 00:12:15.930 "num_base_bdevs": 4, 00:12:15.930 "num_base_bdevs_discovered": 1, 00:12:15.930 "num_base_bdevs_operational": 4, 00:12:15.930 "base_bdevs_list": [ 00:12:15.930 { 00:12:15.930 "name": "pt1", 00:12:15.930 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:15.930 "is_configured": true, 00:12:15.930 "data_offset": 2048, 00:12:15.930 "data_size": 63488 00:12:15.930 }, 00:12:15.930 { 00:12:15.930 "name": null, 00:12:15.930 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:15.931 "is_configured": false, 00:12:15.931 "data_offset": 2048, 00:12:15.931 "data_size": 63488 00:12:15.931 }, 00:12:15.931 { 00:12:15.931 "name": null, 00:12:15.931 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:15.931 
"is_configured": false, 00:12:15.931 "data_offset": 2048, 00:12:15.931 "data_size": 63488 00:12:15.931 }, 00:12:15.931 { 00:12:15.931 "name": null, 00:12:15.931 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:15.931 "is_configured": false, 00:12:15.931 "data_offset": 2048, 00:12:15.931 "data_size": 63488 00:12:15.931 } 00:12:15.931 ] 00:12:15.931 }' 00:12:15.931 10:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.931 10:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.513 10:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:16.513 10:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:16.513 10:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.513 10:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.513 [2024-11-15 10:40:37.512850] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:16.513 [2024-11-15 10:40:37.512930] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:16.513 [2024-11-15 10:40:37.512971] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:16.513 [2024-11-15 10:40:37.512988] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:16.513 [2024-11-15 10:40:37.513546] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:16.513 [2024-11-15 10:40:37.513583] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:16.513 [2024-11-15 10:40:37.513694] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:16.513 [2024-11-15 10:40:37.513739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:12:16.513 pt2 00:12:16.513 10:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.513 10:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:16.513 10:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.513 10:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.513 [2024-11-15 10:40:37.520832] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:16.513 10:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.513 10:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:16.513 10:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:16.513 10:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:16.513 10:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.513 10:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.513 10:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:16.514 10:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.514 10:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.514 10:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.514 10:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.514 10:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.514 10:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
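The `verify_raid_bdev_state` helper invoked above (bdev/bdev_raid.sh@103-115) selects raid_bdev1 out of `bdev_raid_get_bdevs all` and checks its state fields. A Python restatement of those checks, with the sample trimmed to the fields the helper inspects and values taken from the dump that follows in this log (illustrative only):

```python
import json

# Trimmed "bdev_raid_get_bdevs all" output: at this point pt2 has just been
# deleted again, so only one base bdev is discovered.
bdevs = json.loads("""
[
  {
    "name": "raid_bdev1",
    "state": "configuring",
    "raid_level": "raid1",
    "strip_size_kb": 0,
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 4
  }
]
""")

# Equivalent of: jq -r '.[] | select(.name == "raid_bdev1")'
info = next(b for b in bdevs if b["name"] == "raid_bdev1")

# The state checks the shell helper performs, written as assertions.
assert info["state"] == "configuring"
assert info["raid_level"] == "raid1"
assert info["num_base_bdevs_operational"] == 4
assert info["num_base_bdevs_discovered"] == 1
```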
00:12:16.514 10:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.514 10:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.514 10:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.514 10:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.514 "name": "raid_bdev1", 00:12:16.514 "uuid": "a44caee0-39d9-40b4-8576-7c036db30b91", 00:12:16.514 "strip_size_kb": 0, 00:12:16.514 "state": "configuring", 00:12:16.514 "raid_level": "raid1", 00:12:16.514 "superblock": true, 00:12:16.514 "num_base_bdevs": 4, 00:12:16.514 "num_base_bdevs_discovered": 1, 00:12:16.514 "num_base_bdevs_operational": 4, 00:12:16.514 "base_bdevs_list": [ 00:12:16.514 { 00:12:16.514 "name": "pt1", 00:12:16.514 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:16.514 "is_configured": true, 00:12:16.514 "data_offset": 2048, 00:12:16.514 "data_size": 63488 00:12:16.514 }, 00:12:16.514 { 00:12:16.514 "name": null, 00:12:16.514 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:16.514 "is_configured": false, 00:12:16.514 "data_offset": 0, 00:12:16.514 "data_size": 63488 00:12:16.514 }, 00:12:16.514 { 00:12:16.514 "name": null, 00:12:16.514 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:16.514 "is_configured": false, 00:12:16.514 "data_offset": 2048, 00:12:16.514 "data_size": 63488 00:12:16.514 }, 00:12:16.514 { 00:12:16.514 "name": null, 00:12:16.514 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:16.514 "is_configured": false, 00:12:16.514 "data_offset": 2048, 00:12:16.514 "data_size": 63488 00:12:16.514 } 00:12:16.514 ] 00:12:16.514 }' 00:12:16.514 10:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.514 10:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.081 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:12:17.081 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:17.081 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:17.081 10:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.081 10:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.081 [2024-11-15 10:40:38.024991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:17.081 [2024-11-15 10:40:38.025074] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.081 [2024-11-15 10:40:38.025114] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:17.081 [2024-11-15 10:40:38.025131] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.081 [2024-11-15 10:40:38.025694] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.081 [2024-11-15 10:40:38.025719] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:17.081 [2024-11-15 10:40:38.025824] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:17.081 [2024-11-15 10:40:38.025856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:17.081 pt2 00:12:17.081 10:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.081 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:17.081 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:17.081 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:17.081 10:40:38 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.081 10:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.081 [2024-11-15 10:40:38.036962] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:17.081 [2024-11-15 10:40:38.037149] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.081 [2024-11-15 10:40:38.037219] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:17.081 [2024-11-15 10:40:38.037443] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.081 [2024-11-15 10:40:38.037943] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.081 [2024-11-15 10:40:38.038088] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:17.081 [2024-11-15 10:40:38.038279] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:17.081 [2024-11-15 10:40:38.038409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:17.081 pt3 00:12:17.081 10:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.081 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:17.081 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:17.081 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:17.081 10:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.081 10:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.081 [2024-11-15 10:40:38.044941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:17.081 [2024-11-15 
10:40:38.045116] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.081 [2024-11-15 10:40:38.045185] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:17.081 [2024-11-15 10:40:38.045318] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.081 [2024-11-15 10:40:38.045833] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.081 [2024-11-15 10:40:38.045964] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:17.081 [2024-11-15 10:40:38.046152] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:17.081 [2024-11-15 10:40:38.046283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:17.081 [2024-11-15 10:40:38.046624] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:17.081 [2024-11-15 10:40:38.046741] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:17.081 [2024-11-15 10:40:38.047103] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:17.081 [2024-11-15 10:40:38.047404] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:17.081 [2024-11-15 10:40:38.047540] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:17.081 [2024-11-15 10:40:38.047813] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:17.081 pt4 00:12:17.081 10:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.081 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:17.081 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:17.081 10:40:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:17.081 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:17.081 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:17.081 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.081 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.081 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:17.081 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.081 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.081 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.081 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.081 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.081 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.081 10:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.081 10:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.081 10:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.081 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.081 "name": "raid_bdev1", 00:12:17.081 "uuid": "a44caee0-39d9-40b4-8576-7c036db30b91", 00:12:17.081 "strip_size_kb": 0, 00:12:17.081 "state": "online", 00:12:17.081 "raid_level": "raid1", 00:12:17.081 "superblock": true, 00:12:17.081 "num_base_bdevs": 4, 00:12:17.081 
"num_base_bdevs_discovered": 4, 00:12:17.081 "num_base_bdevs_operational": 4, 00:12:17.081 "base_bdevs_list": [ 00:12:17.081 { 00:12:17.081 "name": "pt1", 00:12:17.081 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:17.081 "is_configured": true, 00:12:17.081 "data_offset": 2048, 00:12:17.081 "data_size": 63488 00:12:17.081 }, 00:12:17.081 { 00:12:17.081 "name": "pt2", 00:12:17.081 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:17.081 "is_configured": true, 00:12:17.081 "data_offset": 2048, 00:12:17.082 "data_size": 63488 00:12:17.082 }, 00:12:17.082 { 00:12:17.082 "name": "pt3", 00:12:17.082 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:17.082 "is_configured": true, 00:12:17.082 "data_offset": 2048, 00:12:17.082 "data_size": 63488 00:12:17.082 }, 00:12:17.082 { 00:12:17.082 "name": "pt4", 00:12:17.082 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:17.082 "is_configured": true, 00:12:17.082 "data_offset": 2048, 00:12:17.082 "data_size": 63488 00:12:17.082 } 00:12:17.082 ] 00:12:17.082 }' 00:12:17.082 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.082 10:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.648 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:17.648 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:17.648 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:17.648 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:17.648 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:17.648 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:17.648 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:12:17.648 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:17.648 10:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.648 10:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.648 [2024-11-15 10:40:38.577573] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:17.648 10:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.648 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:17.648 "name": "raid_bdev1", 00:12:17.648 "aliases": [ 00:12:17.648 "a44caee0-39d9-40b4-8576-7c036db30b91" 00:12:17.648 ], 00:12:17.648 "product_name": "Raid Volume", 00:12:17.648 "block_size": 512, 00:12:17.648 "num_blocks": 63488, 00:12:17.648 "uuid": "a44caee0-39d9-40b4-8576-7c036db30b91", 00:12:17.648 "assigned_rate_limits": { 00:12:17.648 "rw_ios_per_sec": 0, 00:12:17.648 "rw_mbytes_per_sec": 0, 00:12:17.648 "r_mbytes_per_sec": 0, 00:12:17.648 "w_mbytes_per_sec": 0 00:12:17.648 }, 00:12:17.648 "claimed": false, 00:12:17.648 "zoned": false, 00:12:17.648 "supported_io_types": { 00:12:17.648 "read": true, 00:12:17.648 "write": true, 00:12:17.648 "unmap": false, 00:12:17.648 "flush": false, 00:12:17.648 "reset": true, 00:12:17.648 "nvme_admin": false, 00:12:17.648 "nvme_io": false, 00:12:17.648 "nvme_io_md": false, 00:12:17.648 "write_zeroes": true, 00:12:17.648 "zcopy": false, 00:12:17.648 "get_zone_info": false, 00:12:17.648 "zone_management": false, 00:12:17.648 "zone_append": false, 00:12:17.648 "compare": false, 00:12:17.648 "compare_and_write": false, 00:12:17.648 "abort": false, 00:12:17.648 "seek_hole": false, 00:12:17.648 "seek_data": false, 00:12:17.648 "copy": false, 00:12:17.648 "nvme_iov_md": false 00:12:17.648 }, 00:12:17.648 "memory_domains": [ 00:12:17.648 { 00:12:17.648 "dma_device_id": "system", 00:12:17.648 
"dma_device_type": 1 00:12:17.648 }, 00:12:17.648 { 00:12:17.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.648 "dma_device_type": 2 00:12:17.648 }, 00:12:17.648 { 00:12:17.648 "dma_device_id": "system", 00:12:17.648 "dma_device_type": 1 00:12:17.648 }, 00:12:17.648 { 00:12:17.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.648 "dma_device_type": 2 00:12:17.648 }, 00:12:17.648 { 00:12:17.648 "dma_device_id": "system", 00:12:17.648 "dma_device_type": 1 00:12:17.648 }, 00:12:17.648 { 00:12:17.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.648 "dma_device_type": 2 00:12:17.648 }, 00:12:17.648 { 00:12:17.648 "dma_device_id": "system", 00:12:17.648 "dma_device_type": 1 00:12:17.648 }, 00:12:17.648 { 00:12:17.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.648 "dma_device_type": 2 00:12:17.648 } 00:12:17.648 ], 00:12:17.648 "driver_specific": { 00:12:17.648 "raid": { 00:12:17.648 "uuid": "a44caee0-39d9-40b4-8576-7c036db30b91", 00:12:17.648 "strip_size_kb": 0, 00:12:17.648 "state": "online", 00:12:17.648 "raid_level": "raid1", 00:12:17.648 "superblock": true, 00:12:17.648 "num_base_bdevs": 4, 00:12:17.648 "num_base_bdevs_discovered": 4, 00:12:17.648 "num_base_bdevs_operational": 4, 00:12:17.648 "base_bdevs_list": [ 00:12:17.648 { 00:12:17.648 "name": "pt1", 00:12:17.648 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:17.648 "is_configured": true, 00:12:17.648 "data_offset": 2048, 00:12:17.648 "data_size": 63488 00:12:17.648 }, 00:12:17.649 { 00:12:17.649 "name": "pt2", 00:12:17.649 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:17.649 "is_configured": true, 00:12:17.649 "data_offset": 2048, 00:12:17.649 "data_size": 63488 00:12:17.649 }, 00:12:17.649 { 00:12:17.649 "name": "pt3", 00:12:17.649 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:17.649 "is_configured": true, 00:12:17.649 "data_offset": 2048, 00:12:17.649 "data_size": 63488 00:12:17.649 }, 00:12:17.649 { 00:12:17.649 "name": "pt4", 00:12:17.649 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:12:17.649 "is_configured": true, 00:12:17.649 "data_offset": 2048, 00:12:17.649 "data_size": 63488 00:12:17.649 } 00:12:17.649 ] 00:12:17.649 } 00:12:17.649 } 00:12:17.649 }' 00:12:17.649 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:17.649 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:17.649 pt2 00:12:17.649 pt3 00:12:17.649 pt4' 00:12:17.649 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.649 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:17.649 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:17.649 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:17.649 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.649 10:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.649 10:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.649 10:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.649 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.649 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.649 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:17.649 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.649 10:40:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:17.649 10:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.649 10:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.649 10:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.907 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.907 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.907 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:17.907 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:17.907 10:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.907 10:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.907 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.907 10:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.907 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.907 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.907 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:17.907 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:17.907 10:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.907 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:12:17.907 10:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.907 10:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.907 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.907 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.907 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:17.907 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:17.907 10:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.907 10:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.907 [2024-11-15 10:40:38.953621] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:17.907 10:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.907 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a44caee0-39d9-40b4-8576-7c036db30b91 '!=' a44caee0-39d9-40b4-8576-7c036db30b91 ']' 00:12:17.907 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:12:17.907 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:17.907 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:17.907 10:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:17.907 10:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.907 10:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.907 [2024-11-15 10:40:39.001323] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:17.907 10:40:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.907 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:17.907 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:17.907 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:17.907 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.907 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.907 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:17.907 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.907 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.907 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.907 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.907 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.907 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.907 10:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.907 10:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.907 10:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.907 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.907 "name": "raid_bdev1", 00:12:17.907 "uuid": "a44caee0-39d9-40b4-8576-7c036db30b91", 00:12:17.907 "strip_size_kb": 0, 00:12:17.907 "state": "online", 
00:12:17.907 "raid_level": "raid1", 00:12:17.907 "superblock": true, 00:12:17.907 "num_base_bdevs": 4, 00:12:17.907 "num_base_bdevs_discovered": 3, 00:12:17.907 "num_base_bdevs_operational": 3, 00:12:17.907 "base_bdevs_list": [ 00:12:17.907 { 00:12:17.907 "name": null, 00:12:17.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.907 "is_configured": false, 00:12:17.907 "data_offset": 0, 00:12:17.907 "data_size": 63488 00:12:17.907 }, 00:12:17.907 { 00:12:17.907 "name": "pt2", 00:12:17.907 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:17.907 "is_configured": true, 00:12:17.907 "data_offset": 2048, 00:12:17.907 "data_size": 63488 00:12:17.907 }, 00:12:17.907 { 00:12:17.907 "name": "pt3", 00:12:17.907 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:17.907 "is_configured": true, 00:12:17.907 "data_offset": 2048, 00:12:17.907 "data_size": 63488 00:12:17.907 }, 00:12:17.907 { 00:12:17.907 "name": "pt4", 00:12:17.907 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:17.907 "is_configured": true, 00:12:17.907 "data_offset": 2048, 00:12:17.907 "data_size": 63488 00:12:17.907 } 00:12:17.907 ] 00:12:17.907 }' 00:12:17.907 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.907 10:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.474 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:18.474 10:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.474 10:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.474 [2024-11-15 10:40:39.561381] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:18.474 [2024-11-15 10:40:39.561421] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:18.474 [2024-11-15 10:40:39.561698] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:12:18.474 [2024-11-15 10:40:39.561855] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:18.474 [2024-11-15 10:40:39.561878] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:18.474 10:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.474 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.474 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:18.474 10:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.474 10:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.474 10:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.474 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:18.474 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:18.474 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:18.474 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:18.474 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:18.474 10:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.474 10:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.733 10:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.733 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:18.733 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:18.733 
10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:12:18.733 10:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.733 10:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.733 10:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.733 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:18.733 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:18.733 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:12:18.733 10:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.733 10:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.733 10:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.733 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:18.733 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:18.733 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:18.733 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:18.733 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:18.733 10:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.733 10:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.733 [2024-11-15 10:40:39.653433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:18.733 [2024-11-15 10:40:39.653661] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.733 [2024-11-15 10:40:39.653705] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:18.733 [2024-11-15 10:40:39.653721] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.733 [2024-11-15 10:40:39.656640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.733 [2024-11-15 10:40:39.656799] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:18.733 [2024-11-15 10:40:39.656934] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:18.733 [2024-11-15 10:40:39.656995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:18.733 pt2 00:12:18.733 10:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.733 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:18.733 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:18.734 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:18.734 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.734 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.734 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:18.734 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.734 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.734 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.734 10:40:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.734 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.734 10:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.734 10:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.734 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.734 10:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.734 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.734 "name": "raid_bdev1", 00:12:18.734 "uuid": "a44caee0-39d9-40b4-8576-7c036db30b91", 00:12:18.734 "strip_size_kb": 0, 00:12:18.734 "state": "configuring", 00:12:18.734 "raid_level": "raid1", 00:12:18.734 "superblock": true, 00:12:18.734 "num_base_bdevs": 4, 00:12:18.734 "num_base_bdevs_discovered": 1, 00:12:18.734 "num_base_bdevs_operational": 3, 00:12:18.734 "base_bdevs_list": [ 00:12:18.734 { 00:12:18.734 "name": null, 00:12:18.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.734 "is_configured": false, 00:12:18.734 "data_offset": 2048, 00:12:18.734 "data_size": 63488 00:12:18.734 }, 00:12:18.734 { 00:12:18.734 "name": "pt2", 00:12:18.734 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:18.734 "is_configured": true, 00:12:18.734 "data_offset": 2048, 00:12:18.734 "data_size": 63488 00:12:18.734 }, 00:12:18.734 { 00:12:18.734 "name": null, 00:12:18.734 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:18.734 "is_configured": false, 00:12:18.734 "data_offset": 2048, 00:12:18.734 "data_size": 63488 00:12:18.734 }, 00:12:18.734 { 00:12:18.734 "name": null, 00:12:18.734 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:18.734 "is_configured": false, 00:12:18.734 "data_offset": 2048, 00:12:18.734 "data_size": 63488 00:12:18.734 } 00:12:18.734 ] 00:12:18.734 }' 
00:12:18.734 10:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.734 10:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.299 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:19.299 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:19.299 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:19.299 10:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.299 10:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.299 [2024-11-15 10:40:40.189634] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:19.299 [2024-11-15 10:40:40.189722] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.299 [2024-11-15 10:40:40.189755] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:19.299 [2024-11-15 10:40:40.189770] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.299 [2024-11-15 10:40:40.190335] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.299 [2024-11-15 10:40:40.190360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:19.299 [2024-11-15 10:40:40.190461] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:19.299 [2024-11-15 10:40:40.190493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:19.299 pt3 00:12:19.299 10:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.299 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:12:19.299 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:19.299 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:19.299 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:19.299 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.299 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:19.299 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.299 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.299 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.299 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.299 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.299 10:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.299 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.299 10:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.299 10:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.299 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.299 "name": "raid_bdev1", 00:12:19.299 "uuid": "a44caee0-39d9-40b4-8576-7c036db30b91", 00:12:19.299 "strip_size_kb": 0, 00:12:19.299 "state": "configuring", 00:12:19.299 "raid_level": "raid1", 00:12:19.299 "superblock": true, 00:12:19.299 "num_base_bdevs": 4, 00:12:19.299 "num_base_bdevs_discovered": 2, 00:12:19.299 "num_base_bdevs_operational": 3, 00:12:19.299 
"base_bdevs_list": [ 00:12:19.299 { 00:12:19.299 "name": null, 00:12:19.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.299 "is_configured": false, 00:12:19.299 "data_offset": 2048, 00:12:19.299 "data_size": 63488 00:12:19.299 }, 00:12:19.299 { 00:12:19.299 "name": "pt2", 00:12:19.299 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:19.299 "is_configured": true, 00:12:19.299 "data_offset": 2048, 00:12:19.299 "data_size": 63488 00:12:19.299 }, 00:12:19.299 { 00:12:19.299 "name": "pt3", 00:12:19.299 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:19.299 "is_configured": true, 00:12:19.299 "data_offset": 2048, 00:12:19.299 "data_size": 63488 00:12:19.299 }, 00:12:19.299 { 00:12:19.299 "name": null, 00:12:19.299 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:19.299 "is_configured": false, 00:12:19.299 "data_offset": 2048, 00:12:19.299 "data_size": 63488 00:12:19.299 } 00:12:19.299 ] 00:12:19.299 }' 00:12:19.299 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.299 10:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.558 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:19.558 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:19.558 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:12:19.558 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:19.558 10:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.558 10:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.558 [2024-11-15 10:40:40.709820] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:19.558 [2024-11-15 10:40:40.710025] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.558 [2024-11-15 10:40:40.710071] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:19.558 [2024-11-15 10:40:40.710088] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.558 [2024-11-15 10:40:40.710664] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.558 [2024-11-15 10:40:40.710689] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:19.558 [2024-11-15 10:40:40.710794] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:19.558 [2024-11-15 10:40:40.710834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:19.558 [2024-11-15 10:40:40.711005] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:19.558 [2024-11-15 10:40:40.711021] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:19.558 [2024-11-15 10:40:40.711327] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:19.558 [2024-11-15 10:40:40.711532] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:19.558 [2024-11-15 10:40:40.711552] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:19.558 [2024-11-15 10:40:40.711717] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:19.558 pt4 00:12:19.558 10:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.558 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:19.558 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:19.558 10:40:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:19.558 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:19.558 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.558 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:19.558 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.558 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.558 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.558 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.816 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.816 10:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.816 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.816 10:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.816 10:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.816 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.816 "name": "raid_bdev1", 00:12:19.816 "uuid": "a44caee0-39d9-40b4-8576-7c036db30b91", 00:12:19.816 "strip_size_kb": 0, 00:12:19.816 "state": "online", 00:12:19.816 "raid_level": "raid1", 00:12:19.816 "superblock": true, 00:12:19.816 "num_base_bdevs": 4, 00:12:19.816 "num_base_bdevs_discovered": 3, 00:12:19.816 "num_base_bdevs_operational": 3, 00:12:19.816 "base_bdevs_list": [ 00:12:19.816 { 00:12:19.816 "name": null, 00:12:19.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.816 "is_configured": false, 00:12:19.816 
"data_offset": 2048, 00:12:19.816 "data_size": 63488 00:12:19.816 }, 00:12:19.816 { 00:12:19.816 "name": "pt2", 00:12:19.816 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:19.816 "is_configured": true, 00:12:19.816 "data_offset": 2048, 00:12:19.816 "data_size": 63488 00:12:19.816 }, 00:12:19.816 { 00:12:19.816 "name": "pt3", 00:12:19.816 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:19.816 "is_configured": true, 00:12:19.816 "data_offset": 2048, 00:12:19.816 "data_size": 63488 00:12:19.816 }, 00:12:19.816 { 00:12:19.816 "name": "pt4", 00:12:19.816 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:19.816 "is_configured": true, 00:12:19.816 "data_offset": 2048, 00:12:19.816 "data_size": 63488 00:12:19.816 } 00:12:19.816 ] 00:12:19.816 }' 00:12:19.816 10:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.816 10:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.382 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:20.382 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.382 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.382 [2024-11-15 10:40:41.257914] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:20.382 [2024-11-15 10:40:41.257950] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:20.382 [2024-11-15 10:40:41.258054] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:20.382 [2024-11-15 10:40:41.258150] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:20.382 [2024-11-15 10:40:41.258171] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:20.382 10:40:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.382 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.382 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:20.382 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.382 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.382 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.382 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:20.382 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:20.382 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:12:20.382 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:12:20.382 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:12:20.382 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.382 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.382 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.382 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:20.382 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.382 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.382 [2024-11-15 10:40:41.329912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:20.382 [2024-11-15 10:40:41.329991] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:20.382 [2024-11-15 10:40:41.330019] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:12:20.382 [2024-11-15 10:40:41.330036] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.382 [2024-11-15 10:40:41.333096] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.382 [2024-11-15 10:40:41.333177] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:20.382 [2024-11-15 10:40:41.333286] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:20.382 [2024-11-15 10:40:41.333348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:20.382 [2024-11-15 10:40:41.333519] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:20.382 [2024-11-15 10:40:41.333560] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:20.382 [2024-11-15 10:40:41.333583] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:12:20.382 [2024-11-15 10:40:41.333674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:20.383 [2024-11-15 10:40:41.333849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:20.383 pt1 00:12:20.383 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.383 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:12:20.383 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:20.383 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:20.383 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:12:20.383 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.383 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.383 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:20.383 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.383 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.383 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.383 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.383 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.383 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.383 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.383 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.383 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.383 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.383 "name": "raid_bdev1", 00:12:20.383 "uuid": "a44caee0-39d9-40b4-8576-7c036db30b91", 00:12:20.383 "strip_size_kb": 0, 00:12:20.383 "state": "configuring", 00:12:20.383 "raid_level": "raid1", 00:12:20.383 "superblock": true, 00:12:20.383 "num_base_bdevs": 4, 00:12:20.383 "num_base_bdevs_discovered": 2, 00:12:20.383 "num_base_bdevs_operational": 3, 00:12:20.383 "base_bdevs_list": [ 00:12:20.383 { 00:12:20.383 "name": null, 00:12:20.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.383 "is_configured": false, 00:12:20.383 "data_offset": 2048, 00:12:20.383 
"data_size": 63488 00:12:20.383 }, 00:12:20.383 { 00:12:20.383 "name": "pt2", 00:12:20.383 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:20.383 "is_configured": true, 00:12:20.383 "data_offset": 2048, 00:12:20.383 "data_size": 63488 00:12:20.383 }, 00:12:20.383 { 00:12:20.383 "name": "pt3", 00:12:20.383 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:20.383 "is_configured": true, 00:12:20.383 "data_offset": 2048, 00:12:20.383 "data_size": 63488 00:12:20.383 }, 00:12:20.383 { 00:12:20.383 "name": null, 00:12:20.383 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:20.383 "is_configured": false, 00:12:20.383 "data_offset": 2048, 00:12:20.383 "data_size": 63488 00:12:20.383 } 00:12:20.383 ] 00:12:20.383 }' 00:12:20.383 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.383 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.950 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:12:20.950 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.950 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.950 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:20.950 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.950 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:12:20.950 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:20.950 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.950 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.950 [2024-11-15 
10:40:41.922253] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:20.950 [2024-11-15 10:40:41.922333] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.950 [2024-11-15 10:40:41.922368] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:12:20.950 [2024-11-15 10:40:41.922383] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.950 [2024-11-15 10:40:41.922945] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.950 [2024-11-15 10:40:41.922970] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:20.950 [2024-11-15 10:40:41.923083] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:20.950 [2024-11-15 10:40:41.923130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:20.951 [2024-11-15 10:40:41.923330] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:12:20.951 [2024-11-15 10:40:41.923354] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:20.951 [2024-11-15 10:40:41.923790] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:20.951 [2024-11-15 10:40:41.924083] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:12:20.951 [2024-11-15 10:40:41.924107] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:12:20.951 [2024-11-15 10:40:41.924334] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:20.951 pt4 00:12:20.951 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.951 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:20.951 10:40:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:20.951 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:20.951 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.951 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.951 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:20.951 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.951 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.951 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.951 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.951 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.951 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.951 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.951 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.951 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.951 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.951 "name": "raid_bdev1", 00:12:20.951 "uuid": "a44caee0-39d9-40b4-8576-7c036db30b91", 00:12:20.951 "strip_size_kb": 0, 00:12:20.951 "state": "online", 00:12:20.951 "raid_level": "raid1", 00:12:20.951 "superblock": true, 00:12:20.951 "num_base_bdevs": 4, 00:12:20.951 "num_base_bdevs_discovered": 3, 00:12:20.951 "num_base_bdevs_operational": 3, 00:12:20.951 "base_bdevs_list": [ 00:12:20.951 { 
00:12:20.951 "name": null, 00:12:20.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.951 "is_configured": false, 00:12:20.951 "data_offset": 2048, 00:12:20.951 "data_size": 63488 00:12:20.951 }, 00:12:20.951 { 00:12:20.951 "name": "pt2", 00:12:20.951 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:20.951 "is_configured": true, 00:12:20.951 "data_offset": 2048, 00:12:20.951 "data_size": 63488 00:12:20.951 }, 00:12:20.951 { 00:12:20.951 "name": "pt3", 00:12:20.951 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:20.951 "is_configured": true, 00:12:20.951 "data_offset": 2048, 00:12:20.951 "data_size": 63488 00:12:20.951 }, 00:12:20.951 { 00:12:20.951 "name": "pt4", 00:12:20.951 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:20.951 "is_configured": true, 00:12:20.951 "data_offset": 2048, 00:12:20.951 "data_size": 63488 00:12:20.951 } 00:12:20.951 ] 00:12:20.951 }' 00:12:20.951 10:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.951 10:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.519 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:21.519 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.519 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:21.519 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.519 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.520 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:21.520 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:21.520 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:21.520 
10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.520 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.520 [2024-11-15 10:40:42.534761] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:21.520 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.520 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' a44caee0-39d9-40b4-8576-7c036db30b91 '!=' a44caee0-39d9-40b4-8576-7c036db30b91 ']' 00:12:21.520 10:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74655 00:12:21.520 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74655 ']' 00:12:21.520 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74655 00:12:21.520 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:21.520 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:21.520 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74655 00:12:21.520 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:21.520 killing process with pid 74655 00:12:21.520 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:21.520 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74655' 00:12:21.520 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74655 00:12:21.520 [2024-11-15 10:40:42.612159] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:21.520 10:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74655 00:12:21.520 [2024-11-15 10:40:42.612278] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:21.520 [2024-11-15 10:40:42.612385] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:21.520 [2024-11-15 10:40:42.612405] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:12:22.086 [2024-11-15 10:40:42.963773] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:23.021 10:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:23.021 00:12:23.021 real 0m9.386s 00:12:23.021 user 0m15.445s 00:12:23.021 sys 0m1.356s 00:12:23.021 10:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:23.021 10:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.021 ************************************ 00:12:23.021 END TEST raid_superblock_test 00:12:23.021 ************************************ 00:12:23.021 10:40:44 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:12:23.021 10:40:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:23.021 10:40:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.021 10:40:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:23.021 ************************************ 00:12:23.021 START TEST raid_read_error_test 00:12:23.021 ************************************ 00:12:23.021 10:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:12:23.021 10:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:23.021 10:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:23.021 10:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:23.021 10:40:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:23.021 10:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:23.021 10:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:23.021 10:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:23.021 10:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:23.021 10:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:23.021 10:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:23.021 10:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:23.021 10:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:23.021 10:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:23.021 10:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:23.021 10:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:23.021 10:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:23.021 10:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:23.021 10:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:23.021 10:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:23.021 10:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:23.021 10:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:23.021 10:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:23.021 10:40:44 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:23.021 10:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:23.021 10:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:23.022 10:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:23.022 10:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:23.022 10:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.8RHYSZGDC4 00:12:23.022 10:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75154 00:12:23.022 10:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75154 00:12:23.022 10:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75154 ']' 00:12:23.022 10:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.022 10:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:23.022 10:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:23.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.022 10:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.022 10:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:23.022 10:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.022 [2024-11-15 10:40:44.159755] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:12:23.022 [2024-11-15 10:40:44.159932] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75154 ] 00:12:23.280 [2024-11-15 10:40:44.337776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.539 [2024-11-15 10:40:44.473331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.539 [2024-11-15 10:40:44.678249] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:23.539 [2024-11-15 10:40:44.678327] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:24.217 10:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:24.217 10:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:24.217 10:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:24.217 10:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:24.217 10:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.217 10:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.217 BaseBdev1_malloc 00:12:24.217 10:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.217 10:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:24.217 10:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.217 10:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.217 true 00:12:24.217 10:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:24.217 10:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:24.217 10:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.217 10:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.217 [2024-11-15 10:40:45.217436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:24.217 [2024-11-15 10:40:45.217516] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.217 [2024-11-15 10:40:45.217547] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:24.217 [2024-11-15 10:40:45.217564] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.217 [2024-11-15 10:40:45.220255] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.217 [2024-11-15 10:40:45.220443] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:24.217 BaseBdev1 00:12:24.217 10:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.217 10:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:24.217 10:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:24.217 10:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.217 10:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.217 BaseBdev2_malloc 00:12:24.218 10:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.218 10:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:24.218 10:40:45 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.218 10:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.218 true 00:12:24.218 10:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.218 10:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:24.218 10:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.218 10:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.218 [2024-11-15 10:40:45.273314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:24.218 [2024-11-15 10:40:45.273537] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.218 [2024-11-15 10:40:45.273575] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:24.218 [2024-11-15 10:40:45.273594] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.218 [2024-11-15 10:40:45.276426] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.218 [2024-11-15 10:40:45.276476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:24.218 BaseBdev2 00:12:24.218 10:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.218 10:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:24.218 10:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:24.218 10:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.218 10:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.218 BaseBdev3_malloc 00:12:24.218 10:40:45 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.218 10:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:24.218 10:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.218 10:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.218 true 00:12:24.218 10:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.218 10:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:24.218 10:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.218 10:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.476 [2024-11-15 10:40:45.338838] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:24.476 [2024-11-15 10:40:45.338905] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.476 [2024-11-15 10:40:45.338932] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:24.476 [2024-11-15 10:40:45.338950] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.476 [2024-11-15 10:40:45.341744] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.476 [2024-11-15 10:40:45.341916] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:24.476 BaseBdev3 00:12:24.476 10:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.476 10:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:24.476 10:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:24.476 10:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.476 10:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.476 BaseBdev4_malloc 00:12:24.476 10:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.476 10:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:24.476 10:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.476 10:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.476 true 00:12:24.476 10:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.476 10:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:24.476 10:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.476 10:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.476 [2024-11-15 10:40:45.395675] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:24.476 [2024-11-15 10:40:45.395744] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.476 [2024-11-15 10:40:45.395772] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:24.476 [2024-11-15 10:40:45.395789] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.476 [2024-11-15 10:40:45.398562] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.476 [2024-11-15 10:40:45.398612] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:24.476 BaseBdev4 00:12:24.476 10:40:45 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.476 10:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:24.476 10:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.476 10:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.476 [2024-11-15 10:40:45.403754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:24.476 [2024-11-15 10:40:45.406167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:24.476 [2024-11-15 10:40:45.406272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:24.476 [2024-11-15 10:40:45.406376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:24.476 [2024-11-15 10:40:45.406702] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:24.476 [2024-11-15 10:40:45.406736] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:24.476 [2024-11-15 10:40:45.407037] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:24.476 [2024-11-15 10:40:45.407249] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:24.476 [2024-11-15 10:40:45.407265] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:24.476 [2024-11-15 10:40:45.407458] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.476 10:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.476 10:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:24.476 10:40:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:24.476 10:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:24.476 10:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.476 10:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:24.476 10:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:24.476 10:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.476 10:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.476 10:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.476 10:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.476 10:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.476 10:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.476 10:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.476 10:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.476 10:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.476 10:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.476 "name": "raid_bdev1", 00:12:24.476 "uuid": "d012b585-d66f-4e1c-8489-6a24e2ccd1e2", 00:12:24.476 "strip_size_kb": 0, 00:12:24.476 "state": "online", 00:12:24.476 "raid_level": "raid1", 00:12:24.476 "superblock": true, 00:12:24.476 "num_base_bdevs": 4, 00:12:24.476 "num_base_bdevs_discovered": 4, 00:12:24.476 "num_base_bdevs_operational": 4, 00:12:24.476 "base_bdevs_list": [ 00:12:24.476 { 
00:12:24.476 "name": "BaseBdev1", 00:12:24.476 "uuid": "e38afe86-5ea1-5aef-8034-45d1f3604a69", 00:12:24.476 "is_configured": true, 00:12:24.476 "data_offset": 2048, 00:12:24.476 "data_size": 63488 00:12:24.476 }, 00:12:24.476 { 00:12:24.476 "name": "BaseBdev2", 00:12:24.476 "uuid": "97d4bd56-4d93-55b1-974f-2f891e6c1c3e", 00:12:24.476 "is_configured": true, 00:12:24.476 "data_offset": 2048, 00:12:24.476 "data_size": 63488 00:12:24.476 }, 00:12:24.476 { 00:12:24.476 "name": "BaseBdev3", 00:12:24.476 "uuid": "9aeba311-dac4-586a-8b38-9586d99194f2", 00:12:24.476 "is_configured": true, 00:12:24.476 "data_offset": 2048, 00:12:24.476 "data_size": 63488 00:12:24.476 }, 00:12:24.476 { 00:12:24.476 "name": "BaseBdev4", 00:12:24.476 "uuid": "1a11d178-4dc7-5b11-a9ec-2d24dcd2bb20", 00:12:24.476 "is_configured": true, 00:12:24.476 "data_offset": 2048, 00:12:24.476 "data_size": 63488 00:12:24.476 } 00:12:24.476 ] 00:12:24.476 }' 00:12:24.476 10:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.476 10:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.043 10:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:25.043 10:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:25.043 [2024-11-15 10:40:46.049275] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:25.980 10:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:25.980 10:40:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.980 10:40:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.980 10:40:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.980 10:40:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:25.980 10:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:25.980 10:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:25.980 10:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:25.980 10:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:25.980 10:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:25.980 10:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:25.980 10:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:25.980 10:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.980 10:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:25.980 10:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.980 10:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.980 10:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.980 10:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.980 10:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.980 10:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.980 10:40:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.980 10:40:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.980 10:40:46 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.980 10:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.980 "name": "raid_bdev1", 00:12:25.980 "uuid": "d012b585-d66f-4e1c-8489-6a24e2ccd1e2", 00:12:25.980 "strip_size_kb": 0, 00:12:25.980 "state": "online", 00:12:25.980 "raid_level": "raid1", 00:12:25.980 "superblock": true, 00:12:25.980 "num_base_bdevs": 4, 00:12:25.980 "num_base_bdevs_discovered": 4, 00:12:25.980 "num_base_bdevs_operational": 4, 00:12:25.980 "base_bdevs_list": [ 00:12:25.980 { 00:12:25.980 "name": "BaseBdev1", 00:12:25.980 "uuid": "e38afe86-5ea1-5aef-8034-45d1f3604a69", 00:12:25.980 "is_configured": true, 00:12:25.980 "data_offset": 2048, 00:12:25.980 "data_size": 63488 00:12:25.980 }, 00:12:25.980 { 00:12:25.980 "name": "BaseBdev2", 00:12:25.980 "uuid": "97d4bd56-4d93-55b1-974f-2f891e6c1c3e", 00:12:25.980 "is_configured": true, 00:12:25.980 "data_offset": 2048, 00:12:25.980 "data_size": 63488 00:12:25.980 }, 00:12:25.980 { 00:12:25.980 "name": "BaseBdev3", 00:12:25.980 "uuid": "9aeba311-dac4-586a-8b38-9586d99194f2", 00:12:25.980 "is_configured": true, 00:12:25.980 "data_offset": 2048, 00:12:25.980 "data_size": 63488 00:12:25.980 }, 00:12:25.980 { 00:12:25.980 "name": "BaseBdev4", 00:12:25.980 "uuid": "1a11d178-4dc7-5b11-a9ec-2d24dcd2bb20", 00:12:25.980 "is_configured": true, 00:12:25.980 "data_offset": 2048, 00:12:25.980 "data_size": 63488 00:12:25.980 } 00:12:25.980 ] 00:12:25.980 }' 00:12:25.980 10:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.980 10:40:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.547 10:40:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:26.547 10:40:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.547 10:40:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:26.547 [2024-11-15 10:40:47.470870] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:26.547 [2024-11-15 10:40:47.471083] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:26.547 { 00:12:26.547 "results": [ 00:12:26.547 { 00:12:26.547 "job": "raid_bdev1", 00:12:26.547 "core_mask": "0x1", 00:12:26.547 "workload": "randrw", 00:12:26.547 "percentage": 50, 00:12:26.547 "status": "finished", 00:12:26.547 "queue_depth": 1, 00:12:26.547 "io_size": 131072, 00:12:26.547 "runtime": 1.419461, 00:12:26.547 "iops": 7920.612119670776, 00:12:26.547 "mibps": 990.076514958847, 00:12:26.547 "io_failed": 0, 00:12:26.547 "io_timeout": 0, 00:12:26.547 "avg_latency_us": 122.11039369951405, 00:12:26.547 "min_latency_us": 39.33090909090909, 00:12:26.547 "max_latency_us": 2040.5527272727272 00:12:26.547 } 00:12:26.547 ], 00:12:26.547 "core_count": 1 00:12:26.547 } 00:12:26.547 [2024-11-15 10:40:47.474597] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:26.547 [2024-11-15 10:40:47.474673] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:26.547 [2024-11-15 10:40:47.474887] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:26.547 [2024-11-15 10:40:47.474926] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:26.547 10:40:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.547 10:40:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75154 00:12:26.547 10:40:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75154 ']' 00:12:26.547 10:40:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75154 00:12:26.547 10:40:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:12:26.548 10:40:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:26.548 10:40:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75154 00:12:26.548 killing process with pid 75154 00:12:26.548 10:40:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:26.548 10:40:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:26.548 10:40:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75154' 00:12:26.548 10:40:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75154 00:12:26.548 [2024-11-15 10:40:47.512315] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:26.548 10:40:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75154 00:12:26.807 [2024-11-15 10:40:47.789328] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:27.743 10:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.8RHYSZGDC4 00:12:27.743 10:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:27.743 10:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:27.743 10:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:27.743 10:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:27.743 10:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:27.743 10:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:27.743 10:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:27.743 00:12:27.743 real 0m4.840s 00:12:27.743 user 0m6.039s 00:12:27.743 sys 0m0.562s 
00:12:27.743 10:40:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:27.743 10:40:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.743 ************************************ 00:12:27.743 END TEST raid_read_error_test 00:12:27.743 ************************************ 00:12:28.002 10:40:48 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:12:28.002 10:40:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:28.002 10:40:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:28.002 10:40:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:28.002 ************************************ 00:12:28.002 START TEST raid_write_error_test 00:12:28.002 ************************************ 00:12:28.002 10:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:12:28.002 10:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:28.002 10:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:28.002 10:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:28.002 10:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:28.002 10:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:28.002 10:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:28.002 10:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:28.002 10:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:28.002 10:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:28.002 10:40:48 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:28.002 10:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:28.002 10:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:28.002 10:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:28.002 10:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:28.002 10:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:28.002 10:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:28.002 10:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:28.002 10:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:28.002 10:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:28.002 10:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:28.002 10:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:28.002 10:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:28.002 10:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:28.002 10:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:28.002 10:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:28.002 10:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:28.002 10:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:28.002 10:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.8OJIhfJ9e6 00:12:28.002 10:40:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75298 00:12:28.002 10:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75298 00:12:28.002 10:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:28.002 10:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75298 ']' 00:12:28.002 10:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:28.002 10:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:28.002 10:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.002 10:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:28.002 10:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.002 [2024-11-15 10:40:49.047374] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:12:28.002 [2024-11-15 10:40:49.047606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75298 ] 00:12:28.261 [2024-11-15 10:40:49.238559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.261 [2024-11-15 10:40:49.401814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.519 [2024-11-15 10:40:49.614441] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:28.519 [2024-11-15 10:40:49.614524] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.092 BaseBdev1_malloc 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.092 true 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.092 [2024-11-15 10:40:50.076058] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:29.092 [2024-11-15 10:40:50.076125] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.092 [2024-11-15 10:40:50.076155] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:29.092 [2024-11-15 10:40:50.076174] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.092 [2024-11-15 10:40:50.079221] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.092 [2024-11-15 10:40:50.079274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:29.092 BaseBdev1 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.092 BaseBdev2_malloc 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:29.092 10:40:50 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.092 true 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.092 [2024-11-15 10:40:50.131753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:29.092 [2024-11-15 10:40:50.131954] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.092 [2024-11-15 10:40:50.131990] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:29.092 [2024-11-15 10:40:50.132011] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.092 [2024-11-15 10:40:50.134768] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.092 [2024-11-15 10:40:50.134818] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:29.092 BaseBdev2 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:29.092 BaseBdev3_malloc 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.092 true 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.092 [2024-11-15 10:40:50.196911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:29.092 [2024-11-15 10:40:50.196979] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.092 [2024-11-15 10:40:50.197007] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:29.092 [2024-11-15 10:40:50.197026] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.092 [2024-11-15 10:40:50.199845] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.092 [2024-11-15 10:40:50.199895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:29.092 BaseBdev3 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.092 BaseBdev4_malloc 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.092 true 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.092 10:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.351 [2024-11-15 10:40:50.253138] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:29.351 [2024-11-15 10:40:50.253219] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.351 [2024-11-15 10:40:50.253247] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:29.351 [2024-11-15 10:40:50.253265] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.351 [2024-11-15 10:40:50.256182] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.351 [2024-11-15 10:40:50.256248] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:29.351 BaseBdev4 
00:12:29.351 10:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.351 10:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:29.351 10:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.351 10:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.351 [2024-11-15 10:40:50.261207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:29.351 [2024-11-15 10:40:50.263822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:29.351 [2024-11-15 10:40:50.264045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:29.351 [2024-11-15 10:40:50.264260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:29.351 [2024-11-15 10:40:50.264703] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:29.351 [2024-11-15 10:40:50.264849] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:29.351 [2024-11-15 10:40:50.265215] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:29.351 [2024-11-15 10:40:50.265617] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:29.351 [2024-11-15 10:40:50.265740] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:29.351 [2024-11-15 10:40:50.266185] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.351 10:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.351 10:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:12:29.351 10:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.351 10:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:29.351 10:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.351 10:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.351 10:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:29.351 10:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.351 10:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.351 10:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.351 10:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.351 10:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.351 10:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.351 10:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.351 10:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.351 10:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.351 10:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.351 "name": "raid_bdev1", 00:12:29.351 "uuid": "781d43e1-8c9a-49d1-b76f-9a3c1e17435b", 00:12:29.351 "strip_size_kb": 0, 00:12:29.351 "state": "online", 00:12:29.351 "raid_level": "raid1", 00:12:29.351 "superblock": true, 00:12:29.351 "num_base_bdevs": 4, 00:12:29.351 "num_base_bdevs_discovered": 4, 00:12:29.351 
"num_base_bdevs_operational": 4, 00:12:29.351 "base_bdevs_list": [ 00:12:29.351 { 00:12:29.351 "name": "BaseBdev1", 00:12:29.351 "uuid": "e2bd0aa1-85ee-5f46-8be1-adcffd590ac4", 00:12:29.351 "is_configured": true, 00:12:29.351 "data_offset": 2048, 00:12:29.351 "data_size": 63488 00:12:29.351 }, 00:12:29.351 { 00:12:29.351 "name": "BaseBdev2", 00:12:29.351 "uuid": "895dffca-4f2a-5a4d-958d-bee19551d57a", 00:12:29.351 "is_configured": true, 00:12:29.351 "data_offset": 2048, 00:12:29.351 "data_size": 63488 00:12:29.351 }, 00:12:29.351 { 00:12:29.351 "name": "BaseBdev3", 00:12:29.351 "uuid": "a7c51081-79a9-5950-8fee-fcb10a5b669d", 00:12:29.351 "is_configured": true, 00:12:29.351 "data_offset": 2048, 00:12:29.351 "data_size": 63488 00:12:29.351 }, 00:12:29.351 { 00:12:29.351 "name": "BaseBdev4", 00:12:29.351 "uuid": "52abe5c9-19ec-511d-bcf9-9967e8d7c557", 00:12:29.351 "is_configured": true, 00:12:29.351 "data_offset": 2048, 00:12:29.351 "data_size": 63488 00:12:29.351 } 00:12:29.351 ] 00:12:29.351 }' 00:12:29.351 10:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.351 10:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.918 10:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:29.918 10:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:29.918 [2024-11-15 10:40:50.895729] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:30.852 10:40:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:30.852 10:40:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.852 10:40:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.852 [2024-11-15 10:40:51.801329] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:30.852 [2024-11-15 10:40:51.801558] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:30.852 [2024-11-15 10:40:51.801854] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:12:30.852 10:40:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.852 10:40:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:30.852 10:40:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:30.852 10:40:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:30.852 10:40:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:30.852 10:40:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:30.852 10:40:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.852 10:40:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.852 10:40:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.852 10:40:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.852 10:40:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:30.852 10:40:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.852 10:40:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.852 10:40:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.852 10:40:51 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.852 10:40:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.852 10:40:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.852 10:40:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.852 10:40:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.852 10:40:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.852 10:40:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.852 "name": "raid_bdev1", 00:12:30.852 "uuid": "781d43e1-8c9a-49d1-b76f-9a3c1e17435b", 00:12:30.852 "strip_size_kb": 0, 00:12:30.852 "state": "online", 00:12:30.852 "raid_level": "raid1", 00:12:30.852 "superblock": true, 00:12:30.852 "num_base_bdevs": 4, 00:12:30.852 "num_base_bdevs_discovered": 3, 00:12:30.852 "num_base_bdevs_operational": 3, 00:12:30.852 "base_bdevs_list": [ 00:12:30.852 { 00:12:30.852 "name": null, 00:12:30.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.852 "is_configured": false, 00:12:30.852 "data_offset": 0, 00:12:30.852 "data_size": 63488 00:12:30.852 }, 00:12:30.852 { 00:12:30.852 "name": "BaseBdev2", 00:12:30.852 "uuid": "895dffca-4f2a-5a4d-958d-bee19551d57a", 00:12:30.852 "is_configured": true, 00:12:30.852 "data_offset": 2048, 00:12:30.852 "data_size": 63488 00:12:30.852 }, 00:12:30.852 { 00:12:30.852 "name": "BaseBdev3", 00:12:30.852 "uuid": "a7c51081-79a9-5950-8fee-fcb10a5b669d", 00:12:30.852 "is_configured": true, 00:12:30.852 "data_offset": 2048, 00:12:30.852 "data_size": 63488 00:12:30.852 }, 00:12:30.852 { 00:12:30.852 "name": "BaseBdev4", 00:12:30.852 "uuid": "52abe5c9-19ec-511d-bcf9-9967e8d7c557", 00:12:30.852 "is_configured": true, 00:12:30.852 "data_offset": 2048, 00:12:30.852 "data_size": 63488 00:12:30.852 } 00:12:30.852 ] 
00:12:30.852 }' 00:12:30.852 10:40:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.852 10:40:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.419 10:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:31.419 10:40:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.419 10:40:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.419 [2024-11-15 10:40:52.333486] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:31.419 [2024-11-15 10:40:52.333535] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:31.419 [2024-11-15 10:40:52.336900] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:31.419 [2024-11-15 10:40:52.336960] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:31.419 [2024-11-15 10:40:52.337095] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:31.419 [2024-11-15 10:40:52.337114] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:31.419 { 00:12:31.419 "results": [ 00:12:31.419 { 00:12:31.419 "job": "raid_bdev1", 00:12:31.419 "core_mask": "0x1", 00:12:31.419 "workload": "randrw", 00:12:31.419 "percentage": 50, 00:12:31.419 "status": "finished", 00:12:31.419 "queue_depth": 1, 00:12:31.419 "io_size": 131072, 00:12:31.419 "runtime": 1.435266, 00:12:31.419 "iops": 8278.604802176043, 00:12:31.419 "mibps": 1034.8256002720054, 00:12:31.419 "io_failed": 0, 00:12:31.419 "io_timeout": 0, 00:12:31.419 "avg_latency_us": 116.55574115162736, 00:12:31.419 "min_latency_us": 39.33090909090909, 00:12:31.419 "max_latency_us": 1936.290909090909 00:12:31.419 } 00:12:31.419 ], 00:12:31.419 "core_count": 1 
00:12:31.419 } 00:12:31.419 10:40:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.419 10:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75298 00:12:31.419 10:40:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75298 ']' 00:12:31.419 10:40:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75298 00:12:31.419 10:40:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:31.419 10:40:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:31.419 10:40:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75298 00:12:31.419 10:40:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:31.419 10:40:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:31.419 killing process with pid 75298 00:12:31.419 10:40:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75298' 00:12:31.419 10:40:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75298 00:12:31.419 10:40:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75298 00:12:31.419 [2024-11-15 10:40:52.373284] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:31.676 [2024-11-15 10:40:52.662871] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:32.610 10:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.8OJIhfJ9e6 00:12:32.610 10:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:32.610 10:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:32.610 10:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:12:32.610 10:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:32.610 10:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:32.610 10:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:32.610 ************************************ 00:12:32.610 END TEST raid_write_error_test 00:12:32.610 ************************************ 00:12:32.610 10:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:32.610 00:12:32.610 real 0m4.819s 00:12:32.610 user 0m5.900s 00:12:32.610 sys 0m0.621s 00:12:32.610 10:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:32.610 10:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.879 10:40:53 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:12:32.879 10:40:53 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:32.879 10:40:53 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:12:32.879 10:40:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:32.879 10:40:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:32.879 10:40:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:32.879 ************************************ 00:12:32.879 START TEST raid_rebuild_test 00:12:32.879 ************************************ 00:12:32.879 10:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:12:32.879 10:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:32.879 10:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:32.879 10:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:32.879 
10:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:32.879 10:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:32.879 10:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:32.879 10:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:32.879 10:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:32.879 10:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:32.879 10:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:32.879 10:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:32.879 10:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:32.879 10:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:32.879 10:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:32.879 10:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:32.879 10:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:32.879 10:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:32.879 10:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:32.879 10:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:32.879 10:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:32.879 10:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:32.879 10:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:32.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:32.879 10:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:32.879 10:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75443 00:12:32.879 10:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75443 00:12:32.879 10:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75443 ']' 00:12:32.879 10:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.879 10:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:32.879 10:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:32.879 10:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.879 10:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:32.879 10:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.879 [2024-11-15 10:40:53.912919] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:12:32.879 [2024-11-15 10:40:53.913310] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75443 ] 00:12:32.879 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:32.879 Zero copy mechanism will not be used. 
00:12:33.156 [2024-11-15 10:40:54.098563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.156 [2024-11-15 10:40:54.225908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.414 [2024-11-15 10:40:54.426742] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:33.414 [2024-11-15 10:40:54.427005] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:33.980 10:40:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:33.980 10:40:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:12:33.980 10:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:33.980 10:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:33.980 10:40:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.980 10:40:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.980 BaseBdev1_malloc 00:12:33.980 10:40:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.980 10:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:33.980 10:40:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.980 10:40:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.980 [2024-11-15 10:40:54.950477] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:33.980 [2024-11-15 10:40:54.950579] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.980 [2024-11-15 10:40:54.950613] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:33.980 [2024-11-15 10:40:54.950632] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.980 [2024-11-15 10:40:54.953459] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.980 [2024-11-15 10:40:54.953651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:33.980 BaseBdev1 00:12:33.980 10:40:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.980 10:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:33.980 10:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:33.980 10:40:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.980 10:40:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.980 BaseBdev2_malloc 00:12:33.980 10:40:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.980 10:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:33.980 10:40:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.980 10:40:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.980 [2024-11-15 10:40:55.002411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:33.980 [2024-11-15 10:40:55.002485] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.980 [2024-11-15 10:40:55.002530] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:33.980 [2024-11-15 10:40:55.002552] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.980 [2024-11-15 10:40:55.005205] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.980 [2024-11-15 10:40:55.005382] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:33.980 BaseBdev2 00:12:33.980 10:40:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.980 10:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:33.980 10:40:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.980 10:40:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.980 spare_malloc 00:12:33.980 10:40:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.980 10:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:33.980 10:40:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.980 10:40:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.980 spare_delay 00:12:33.980 10:40:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.980 10:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:33.980 10:40:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.980 10:40:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.980 [2024-11-15 10:40:55.075069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:33.980 [2024-11-15 10:40:55.075143] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.980 [2024-11-15 10:40:55.075172] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:33.980 [2024-11-15 10:40:55.075191] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.980 [2024-11-15 
10:40:55.077989] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.980 [2024-11-15 10:40:55.078038] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:33.980 spare 00:12:33.980 10:40:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.980 10:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:33.980 10:40:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.980 10:40:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.980 [2024-11-15 10:40:55.083137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:33.980 [2024-11-15 10:40:55.085589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:33.980 [2024-11-15 10:40:55.085709] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:33.980 [2024-11-15 10:40:55.085740] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:33.980 [2024-11-15 10:40:55.086073] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:33.980 [2024-11-15 10:40:55.086269] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:33.980 [2024-11-15 10:40:55.086288] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:33.980 [2024-11-15 10:40:55.086460] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:33.980 10:40:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.980 10:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:33.980 10:40:55 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:33.980 10:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:33.980 10:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:33.980 10:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:33.980 10:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:33.980 10:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.980 10:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.980 10:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.980 10:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.981 10:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.981 10:40:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.981 10:40:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.981 10:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.981 10:40:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.981 10:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.981 "name": "raid_bdev1", 00:12:33.981 "uuid": "70535710-b36b-4e35-a644-1259fe98c3b4", 00:12:33.981 "strip_size_kb": 0, 00:12:33.981 "state": "online", 00:12:33.981 "raid_level": "raid1", 00:12:33.981 "superblock": false, 00:12:33.981 "num_base_bdevs": 2, 00:12:33.981 "num_base_bdevs_discovered": 2, 00:12:33.981 "num_base_bdevs_operational": 2, 00:12:33.981 "base_bdevs_list": [ 00:12:33.981 { 00:12:33.981 "name": "BaseBdev1", 
00:12:33.981 "uuid": "bcb40fde-e6ff-5451-8b0d-35508811ba6c", 00:12:33.981 "is_configured": true, 00:12:33.981 "data_offset": 0, 00:12:33.981 "data_size": 65536 00:12:33.981 }, 00:12:33.981 { 00:12:33.981 "name": "BaseBdev2", 00:12:33.981 "uuid": "26a47a13-ab73-52e5-8243-c19ba8f4190a", 00:12:33.981 "is_configured": true, 00:12:33.981 "data_offset": 0, 00:12:33.981 "data_size": 65536 00:12:33.981 } 00:12:33.981 ] 00:12:33.981 }' 00:12:33.981 10:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.981 10:40:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.547 10:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:34.547 10:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:34.547 10:40:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.547 10:40:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.547 [2024-11-15 10:40:55.575670] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:34.547 10:40:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.547 10:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:34.547 10:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.547 10:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:34.547 10:40:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.547 10:40:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.547 10:40:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.547 10:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:34.547 
10:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:34.547 10:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:34.547 10:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:34.547 10:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:34.547 10:40:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:34.547 10:40:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:34.547 10:40:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:34.547 10:40:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:34.547 10:40:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:34.547 10:40:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:34.547 10:40:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:34.547 10:40:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:34.547 10:40:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:34.805 [2024-11-15 10:40:55.907429] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:34.805 /dev/nbd0 00:12:34.805 10:40:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:34.805 10:40:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:34.805 10:40:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:34.805 10:40:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:34.805 10:40:55 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:34.805 10:40:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:34.805 10:40:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:34.805 10:40:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:34.805 10:40:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:34.805 10:40:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:34.805 10:40:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:34.805 1+0 records in 00:12:34.805 1+0 records out 00:12:34.805 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273595 s, 15.0 MB/s 00:12:34.805 10:40:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:35.063 10:40:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:35.063 10:40:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:35.063 10:40:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:35.063 10:40:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:35.063 10:40:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:35.063 10:40:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:35.063 10:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:35.063 10:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:35.063 10:40:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 
00:12:41.613 65536+0 records in 00:12:41.613 65536+0 records out 00:12:41.614 33554432 bytes (34 MB, 32 MiB) copied, 6.1952 s, 5.4 MB/s 00:12:41.614 10:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:41.614 10:41:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:41.614 10:41:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:41.614 10:41:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:41.614 10:41:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:41.614 10:41:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:41.614 10:41:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:41.614 [2024-11-15 10:41:02.493791] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:41.614 10:41:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:41.614 10:41:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:41.614 10:41:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:41.614 10:41:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:41.614 10:41:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:41.614 10:41:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:41.614 10:41:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:41.614 10:41:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:41.614 10:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:41.614 10:41:02 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.614 10:41:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.614 [2024-11-15 10:41:02.529877] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:41.614 10:41:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.614 10:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:41.614 10:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:41.614 10:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:41.614 10:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:41.614 10:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:41.614 10:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:41.614 10:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.614 10:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.614 10:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.614 10:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.614 10:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.614 10:41:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.614 10:41:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.614 10:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.614 10:41:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.614 10:41:02 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.614 "name": "raid_bdev1", 00:12:41.614 "uuid": "70535710-b36b-4e35-a644-1259fe98c3b4", 00:12:41.614 "strip_size_kb": 0, 00:12:41.614 "state": "online", 00:12:41.614 "raid_level": "raid1", 00:12:41.614 "superblock": false, 00:12:41.614 "num_base_bdevs": 2, 00:12:41.614 "num_base_bdevs_discovered": 1, 00:12:41.614 "num_base_bdevs_operational": 1, 00:12:41.614 "base_bdevs_list": [ 00:12:41.614 { 00:12:41.614 "name": null, 00:12:41.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.614 "is_configured": false, 00:12:41.614 "data_offset": 0, 00:12:41.614 "data_size": 65536 00:12:41.614 }, 00:12:41.614 { 00:12:41.614 "name": "BaseBdev2", 00:12:41.614 "uuid": "26a47a13-ab73-52e5-8243-c19ba8f4190a", 00:12:41.614 "is_configured": true, 00:12:41.614 "data_offset": 0, 00:12:41.614 "data_size": 65536 00:12:41.614 } 00:12:41.614 ] 00:12:41.614 }' 00:12:41.614 10:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.614 10:41:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.871 10:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:41.871 10:41:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.871 10:41:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.871 [2024-11-15 10:41:03.026029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:42.129 [2024-11-15 10:41:03.042544] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:12:42.129 10:41:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.129 10:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:42.129 [2024-11-15 10:41:03.045116] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:12:43.063 10:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:43.063 10:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:43.063 10:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:43.063 10:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:43.063 10:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:43.063 10:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.063 10:41:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.063 10:41:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.063 10:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.063 10:41:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.063 10:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:43.063 "name": "raid_bdev1", 00:12:43.063 "uuid": "70535710-b36b-4e35-a644-1259fe98c3b4", 00:12:43.063 "strip_size_kb": 0, 00:12:43.063 "state": "online", 00:12:43.063 "raid_level": "raid1", 00:12:43.063 "superblock": false, 00:12:43.063 "num_base_bdevs": 2, 00:12:43.063 "num_base_bdevs_discovered": 2, 00:12:43.063 "num_base_bdevs_operational": 2, 00:12:43.063 "process": { 00:12:43.063 "type": "rebuild", 00:12:43.063 "target": "spare", 00:12:43.063 "progress": { 00:12:43.063 "blocks": 20480, 00:12:43.063 "percent": 31 00:12:43.063 } 00:12:43.063 }, 00:12:43.063 "base_bdevs_list": [ 00:12:43.063 { 00:12:43.063 "name": "spare", 00:12:43.063 "uuid": "3fe896b3-e691-56de-97c7-cd87309a6e27", 00:12:43.063 "is_configured": true, 00:12:43.063 "data_offset": 0, 00:12:43.063 
"data_size": 65536 00:12:43.063 }, 00:12:43.063 { 00:12:43.063 "name": "BaseBdev2", 00:12:43.063 "uuid": "26a47a13-ab73-52e5-8243-c19ba8f4190a", 00:12:43.063 "is_configured": true, 00:12:43.063 "data_offset": 0, 00:12:43.063 "data_size": 65536 00:12:43.063 } 00:12:43.063 ] 00:12:43.063 }' 00:12:43.063 10:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:43.063 10:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:43.063 10:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:43.063 10:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:43.063 10:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:43.063 10:41:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.063 10:41:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.063 [2024-11-15 10:41:04.214585] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:43.321 [2024-11-15 10:41:04.253865] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:43.321 [2024-11-15 10:41:04.253961] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:43.321 [2024-11-15 10:41:04.253985] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:43.321 [2024-11-15 10:41:04.254000] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:43.321 10:41:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.321 10:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:43.321 10:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:12:43.321 10:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:43.321 10:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.321 10:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.321 10:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:43.321 10:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.321 10:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.321 10:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.321 10:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.321 10:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.321 10:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.321 10:41:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.321 10:41:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.321 10:41:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.321 10:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.321 "name": "raid_bdev1", 00:12:43.321 "uuid": "70535710-b36b-4e35-a644-1259fe98c3b4", 00:12:43.321 "strip_size_kb": 0, 00:12:43.321 "state": "online", 00:12:43.321 "raid_level": "raid1", 00:12:43.321 "superblock": false, 00:12:43.321 "num_base_bdevs": 2, 00:12:43.321 "num_base_bdevs_discovered": 1, 00:12:43.321 "num_base_bdevs_operational": 1, 00:12:43.321 "base_bdevs_list": [ 00:12:43.321 { 00:12:43.321 "name": null, 00:12:43.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.321 
"is_configured": false, 00:12:43.321 "data_offset": 0, 00:12:43.321 "data_size": 65536 00:12:43.321 }, 00:12:43.321 { 00:12:43.321 "name": "BaseBdev2", 00:12:43.321 "uuid": "26a47a13-ab73-52e5-8243-c19ba8f4190a", 00:12:43.321 "is_configured": true, 00:12:43.321 "data_offset": 0, 00:12:43.321 "data_size": 65536 00:12:43.321 } 00:12:43.321 ] 00:12:43.321 }' 00:12:43.321 10:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.321 10:41:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.886 10:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:43.886 10:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:43.886 10:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:43.886 10:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:43.886 10:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:43.886 10:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.886 10:41:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.886 10:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.886 10:41:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.886 10:41:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.886 10:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:43.886 "name": "raid_bdev1", 00:12:43.886 "uuid": "70535710-b36b-4e35-a644-1259fe98c3b4", 00:12:43.886 "strip_size_kb": 0, 00:12:43.886 "state": "online", 00:12:43.886 "raid_level": "raid1", 00:12:43.886 "superblock": false, 00:12:43.886 "num_base_bdevs": 2, 00:12:43.886 
"num_base_bdevs_discovered": 1, 00:12:43.886 "num_base_bdevs_operational": 1, 00:12:43.886 "base_bdevs_list": [ 00:12:43.886 { 00:12:43.886 "name": null, 00:12:43.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.886 "is_configured": false, 00:12:43.886 "data_offset": 0, 00:12:43.886 "data_size": 65536 00:12:43.886 }, 00:12:43.886 { 00:12:43.886 "name": "BaseBdev2", 00:12:43.886 "uuid": "26a47a13-ab73-52e5-8243-c19ba8f4190a", 00:12:43.886 "is_configured": true, 00:12:43.886 "data_offset": 0, 00:12:43.886 "data_size": 65536 00:12:43.886 } 00:12:43.886 ] 00:12:43.886 }' 00:12:43.886 10:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:43.886 10:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:43.886 10:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:43.886 10:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:43.886 10:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:43.886 10:41:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.886 10:41:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.886 [2024-11-15 10:41:04.970453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:43.886 [2024-11-15 10:41:04.986338] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:12:43.886 10:41:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.886 10:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:43.886 [2024-11-15 10:41:04.988799] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:45.254 10:41:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:45.255 10:41:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:45.255 10:41:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:45.255 10:41:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:45.255 10:41:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:45.255 10:41:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.255 10:41:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.255 10:41:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.255 10:41:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.255 10:41:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.255 10:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:45.255 "name": "raid_bdev1", 00:12:45.255 "uuid": "70535710-b36b-4e35-a644-1259fe98c3b4", 00:12:45.255 "strip_size_kb": 0, 00:12:45.255 "state": "online", 00:12:45.255 "raid_level": "raid1", 00:12:45.255 "superblock": false, 00:12:45.255 "num_base_bdevs": 2, 00:12:45.255 "num_base_bdevs_discovered": 2, 00:12:45.255 "num_base_bdevs_operational": 2, 00:12:45.255 "process": { 00:12:45.255 "type": "rebuild", 00:12:45.255 "target": "spare", 00:12:45.255 "progress": { 00:12:45.255 "blocks": 20480, 00:12:45.255 "percent": 31 00:12:45.255 } 00:12:45.255 }, 00:12:45.255 "base_bdevs_list": [ 00:12:45.255 { 00:12:45.255 "name": "spare", 00:12:45.255 "uuid": "3fe896b3-e691-56de-97c7-cd87309a6e27", 00:12:45.255 "is_configured": true, 00:12:45.255 "data_offset": 0, 00:12:45.255 "data_size": 65536 00:12:45.255 }, 00:12:45.255 { 00:12:45.255 "name": "BaseBdev2", 00:12:45.255 "uuid": 
"26a47a13-ab73-52e5-8243-c19ba8f4190a", 00:12:45.255 "is_configured": true, 00:12:45.255 "data_offset": 0, 00:12:45.255 "data_size": 65536 00:12:45.255 } 00:12:45.255 ] 00:12:45.255 }' 00:12:45.255 10:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:45.255 10:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:45.255 10:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:45.255 10:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:45.255 10:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:45.255 10:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:45.255 10:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:45.255 10:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:45.255 10:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=395 00:12:45.255 10:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:45.255 10:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:45.255 10:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:45.255 10:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:45.255 10:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:45.255 10:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:45.255 10:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.255 10:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:12:45.255 10:41:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.255 10:41:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.255 10:41:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.255 10:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:45.255 "name": "raid_bdev1", 00:12:45.255 "uuid": "70535710-b36b-4e35-a644-1259fe98c3b4", 00:12:45.255 "strip_size_kb": 0, 00:12:45.255 "state": "online", 00:12:45.255 "raid_level": "raid1", 00:12:45.255 "superblock": false, 00:12:45.255 "num_base_bdevs": 2, 00:12:45.255 "num_base_bdevs_discovered": 2, 00:12:45.255 "num_base_bdevs_operational": 2, 00:12:45.255 "process": { 00:12:45.255 "type": "rebuild", 00:12:45.255 "target": "spare", 00:12:45.255 "progress": { 00:12:45.255 "blocks": 22528, 00:12:45.255 "percent": 34 00:12:45.255 } 00:12:45.255 }, 00:12:45.255 "base_bdevs_list": [ 00:12:45.255 { 00:12:45.255 "name": "spare", 00:12:45.255 "uuid": "3fe896b3-e691-56de-97c7-cd87309a6e27", 00:12:45.255 "is_configured": true, 00:12:45.255 "data_offset": 0, 00:12:45.255 "data_size": 65536 00:12:45.255 }, 00:12:45.255 { 00:12:45.255 "name": "BaseBdev2", 00:12:45.255 "uuid": "26a47a13-ab73-52e5-8243-c19ba8f4190a", 00:12:45.255 "is_configured": true, 00:12:45.255 "data_offset": 0, 00:12:45.255 "data_size": 65536 00:12:45.255 } 00:12:45.255 ] 00:12:45.255 }' 00:12:45.255 10:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:45.255 10:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:45.255 10:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:45.255 10:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:45.255 10:41:06 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:12:46.188 10:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:46.188 10:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:46.188 10:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:46.188 10:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:46.188 10:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:46.188 10:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:46.188 10:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.188 10:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.188 10:41:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.188 10:41:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.188 10:41:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.446 10:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:46.446 "name": "raid_bdev1", 00:12:46.446 "uuid": "70535710-b36b-4e35-a644-1259fe98c3b4", 00:12:46.446 "strip_size_kb": 0, 00:12:46.446 "state": "online", 00:12:46.446 "raid_level": "raid1", 00:12:46.446 "superblock": false, 00:12:46.446 "num_base_bdevs": 2, 00:12:46.446 "num_base_bdevs_discovered": 2, 00:12:46.446 "num_base_bdevs_operational": 2, 00:12:46.446 "process": { 00:12:46.446 "type": "rebuild", 00:12:46.446 "target": "spare", 00:12:46.446 "progress": { 00:12:46.446 "blocks": 47104, 00:12:46.446 "percent": 71 00:12:46.446 } 00:12:46.446 }, 00:12:46.446 "base_bdevs_list": [ 00:12:46.446 { 00:12:46.446 "name": "spare", 00:12:46.446 "uuid": 
"3fe896b3-e691-56de-97c7-cd87309a6e27", 00:12:46.446 "is_configured": true, 00:12:46.446 "data_offset": 0, 00:12:46.446 "data_size": 65536 00:12:46.446 }, 00:12:46.446 { 00:12:46.446 "name": "BaseBdev2", 00:12:46.446 "uuid": "26a47a13-ab73-52e5-8243-c19ba8f4190a", 00:12:46.446 "is_configured": true, 00:12:46.446 "data_offset": 0, 00:12:46.446 "data_size": 65536 00:12:46.446 } 00:12:46.446 ] 00:12:46.446 }' 00:12:46.446 10:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:46.446 10:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:46.446 10:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:46.446 10:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:46.446 10:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:47.380 [2024-11-15 10:41:08.211031] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:47.380 [2024-11-15 10:41:08.211134] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:47.380 [2024-11-15 10:41:08.211206] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:47.380 10:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:47.380 10:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:47.380 10:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:47.380 10:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:47.380 10:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:47.380 10:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:47.380 10:41:08 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.380 10:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.380 10:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.380 10:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.380 10:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.380 10:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:47.380 "name": "raid_bdev1", 00:12:47.380 "uuid": "70535710-b36b-4e35-a644-1259fe98c3b4", 00:12:47.380 "strip_size_kb": 0, 00:12:47.380 "state": "online", 00:12:47.380 "raid_level": "raid1", 00:12:47.380 "superblock": false, 00:12:47.380 "num_base_bdevs": 2, 00:12:47.380 "num_base_bdevs_discovered": 2, 00:12:47.380 "num_base_bdevs_operational": 2, 00:12:47.380 "base_bdevs_list": [ 00:12:47.380 { 00:12:47.380 "name": "spare", 00:12:47.380 "uuid": "3fe896b3-e691-56de-97c7-cd87309a6e27", 00:12:47.380 "is_configured": true, 00:12:47.380 "data_offset": 0, 00:12:47.380 "data_size": 65536 00:12:47.380 }, 00:12:47.380 { 00:12:47.380 "name": "BaseBdev2", 00:12:47.380 "uuid": "26a47a13-ab73-52e5-8243-c19ba8f4190a", 00:12:47.380 "is_configured": true, 00:12:47.380 "data_offset": 0, 00:12:47.380 "data_size": 65536 00:12:47.380 } 00:12:47.380 ] 00:12:47.380 }' 00:12:47.380 10:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:47.637 10:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:47.637 10:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:47.637 10:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:47.637 10:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # 
break 00:12:47.637 10:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:47.637 10:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:47.637 10:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:47.637 10:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:47.637 10:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:47.637 10:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.637 10:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.637 10:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.637 10:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.637 10:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.637 10:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:47.637 "name": "raid_bdev1", 00:12:47.637 "uuid": "70535710-b36b-4e35-a644-1259fe98c3b4", 00:12:47.637 "strip_size_kb": 0, 00:12:47.637 "state": "online", 00:12:47.637 "raid_level": "raid1", 00:12:47.637 "superblock": false, 00:12:47.637 "num_base_bdevs": 2, 00:12:47.637 "num_base_bdevs_discovered": 2, 00:12:47.637 "num_base_bdevs_operational": 2, 00:12:47.637 "base_bdevs_list": [ 00:12:47.637 { 00:12:47.637 "name": "spare", 00:12:47.637 "uuid": "3fe896b3-e691-56de-97c7-cd87309a6e27", 00:12:47.637 "is_configured": true, 00:12:47.637 "data_offset": 0, 00:12:47.637 "data_size": 65536 00:12:47.637 }, 00:12:47.637 { 00:12:47.637 "name": "BaseBdev2", 00:12:47.637 "uuid": "26a47a13-ab73-52e5-8243-c19ba8f4190a", 00:12:47.637 "is_configured": true, 00:12:47.637 "data_offset": 0, 00:12:47.637 "data_size": 65536 
00:12:47.637 } 00:12:47.637 ] 00:12:47.637 }' 00:12:47.637 10:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:47.637 10:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:47.637 10:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:47.637 10:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:47.637 10:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:47.638 10:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:47.638 10:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:47.638 10:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:47.638 10:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:47.638 10:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:47.638 10:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.638 10:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.638 10:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.638 10:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.638 10:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.638 10:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.638 10:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.638 10:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.895 
10:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.895 10:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.895 "name": "raid_bdev1", 00:12:47.895 "uuid": "70535710-b36b-4e35-a644-1259fe98c3b4", 00:12:47.895 "strip_size_kb": 0, 00:12:47.895 "state": "online", 00:12:47.895 "raid_level": "raid1", 00:12:47.895 "superblock": false, 00:12:47.895 "num_base_bdevs": 2, 00:12:47.895 "num_base_bdevs_discovered": 2, 00:12:47.895 "num_base_bdevs_operational": 2, 00:12:47.895 "base_bdevs_list": [ 00:12:47.895 { 00:12:47.895 "name": "spare", 00:12:47.895 "uuid": "3fe896b3-e691-56de-97c7-cd87309a6e27", 00:12:47.895 "is_configured": true, 00:12:47.895 "data_offset": 0, 00:12:47.895 "data_size": 65536 00:12:47.895 }, 00:12:47.895 { 00:12:47.895 "name": "BaseBdev2", 00:12:47.895 "uuid": "26a47a13-ab73-52e5-8243-c19ba8f4190a", 00:12:47.895 "is_configured": true, 00:12:47.895 "data_offset": 0, 00:12:47.895 "data_size": 65536 00:12:47.895 } 00:12:47.895 ] 00:12:47.895 }' 00:12:47.895 10:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.895 10:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.462 10:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:48.462 10:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.462 10:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.462 [2024-11-15 10:41:09.322657] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:48.462 [2024-11-15 10:41:09.322842] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:48.462 [2024-11-15 10:41:09.322995] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:48.462 [2024-11-15 10:41:09.323119] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:48.462 [2024-11-15 10:41:09.323140] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:48.462 10:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.462 10:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.462 10:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.462 10:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.462 10:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:48.462 10:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.462 10:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:48.462 10:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:48.462 10:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:48.462 10:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:48.462 10:41:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:48.462 10:41:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:48.462 10:41:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:48.462 10:41:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:48.462 10:41:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:48.462 10:41:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:48.462 10:41:09 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:48.462 10:41:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:48.462 10:41:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:48.777 /dev/nbd0 00:12:48.777 10:41:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:48.777 10:41:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:48.777 10:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:48.777 10:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:48.777 10:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:48.777 10:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:48.777 10:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:48.777 10:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:48.777 10:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:48.777 10:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:48.777 10:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:48.777 1+0 records in 00:12:48.777 1+0 records out 00:12:48.777 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350932 s, 11.7 MB/s 00:12:48.777 10:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.777 10:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:48.777 10:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # 
rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.777 10:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:48.777 10:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:48.777 10:41:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:48.777 10:41:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:48.777 10:41:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:49.035 /dev/nbd1 00:12:49.035 10:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:49.035 10:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:49.035 10:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:49.035 10:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:49.035 10:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:49.035 10:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:49.035 10:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:49.035 10:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:49.035 10:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:49.035 10:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:49.035 10:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:49.035 1+0 records in 00:12:49.035 1+0 records out 00:12:49.035 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393767 s, 10.4 MB/s 00:12:49.035 10:41:10 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.035 10:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:49.035 10:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.035 10:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:49.035 10:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:49.035 10:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:49.035 10:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:49.035 10:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:49.293 10:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:49.293 10:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:49.293 10:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:49.293 10:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:49.293 10:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:49.293 10:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:49.293 10:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:49.551 10:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:49.551 10:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:49.551 10:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:49.551 
10:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:49.551 10:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:49.551 10:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:49.551 10:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:49.551 10:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:49.551 10:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:49.551 10:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:49.809 10:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:49.809 10:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:49.809 10:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:49.809 10:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:49.809 10:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:49.809 10:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:49.809 10:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:49.809 10:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:49.809 10:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:49.809 10:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75443 00:12:49.809 10:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75443 ']' 00:12:49.809 10:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75443 00:12:49.809 10:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 
-- # uname 00:12:49.809 10:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:49.809 10:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75443 00:12:49.809 10:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:49.809 10:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:49.809 10:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75443' 00:12:49.809 killing process with pid 75443 00:12:49.809 10:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75443 00:12:49.809 Received shutdown signal, test time was about 60.000000 seconds 00:12:49.809 00:12:49.809 Latency(us) 00:12:49.809 [2024-11-15T10:41:10.971Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:49.809 [2024-11-15T10:41:10.971Z] =================================================================================================================== 00:12:49.809 [2024-11-15T10:41:10.971Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:49.809 10:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75443 00:12:49.809 [2024-11-15 10:41:10.826362] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:50.067 [2024-11-15 10:41:11.081742] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:51.000 10:41:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:51.000 00:12:51.000 real 0m18.292s 00:12:51.000 user 0m20.880s 00:12:51.000 sys 0m3.367s 00:12:51.000 10:41:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:51.000 10:41:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.000 ************************************ 00:12:51.000 END TEST raid_rebuild_test 
00:12:51.000 ************************************ 00:12:51.000 10:41:12 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:12:51.000 10:41:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:51.000 10:41:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:51.000 10:41:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:51.000 ************************************ 00:12:51.000 START TEST raid_rebuild_test_sb 00:12:51.000 ************************************ 00:12:51.000 10:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:12:51.000 10:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:51.000 10:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:51.000 10:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:51.000 10:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:51.000 10:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:51.000 10:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:51.000 10:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:51.000 10:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:51.000 10:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:51.000 10:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:51.000 10:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:51.000 10:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:51.000 10:41:12 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:51.000 10:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:51.000 10:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:51.000 10:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:51.000 10:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:51.000 10:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:51.000 10:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:51.000 10:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:51.000 10:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:51.000 10:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:51.000 10:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:51.000 10:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:51.000 10:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75889 00:12:51.000 10:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:51.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:51.000 10:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75889 00:12:51.000 10:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75889 ']' 00:12:51.000 10:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.000 10:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:51.000 10:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.000 10:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:51.000 10:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.257 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:51.257 Zero copy mechanism will not be used. 00:12:51.257 [2024-11-15 10:41:12.241004] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:12:51.257 [2024-11-15 10:41:12.241157] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75889 ] 00:12:51.257 [2024-11-15 10:41:12.412365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.515 [2024-11-15 10:41:12.541721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.773 [2024-11-15 10:41:12.745674] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:51.773 [2024-11-15 10:41:12.745745] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.338 BaseBdev1_malloc 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.338 [2024-11-15 10:41:13.259812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:12:52.338 [2024-11-15 10:41:13.259897] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:52.338 [2024-11-15 10:41:13.259933] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:52.338 [2024-11-15 10:41:13.259953] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:52.338 [2024-11-15 10:41:13.262809] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:52.338 [2024-11-15 10:41:13.262862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:52.338 BaseBdev1 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.338 BaseBdev2_malloc 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.338 [2024-11-15 10:41:13.307389] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:52.338 [2024-11-15 10:41:13.307468] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:52.338 [2024-11-15 10:41:13.307516] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:52.338 [2024-11-15 10:41:13.307541] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:52.338 [2024-11-15 10:41:13.310312] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:52.338 [2024-11-15 10:41:13.310362] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:52.338 BaseBdev2 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.338 spare_malloc 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.338 spare_delay 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.338 [2024-11-15 10:41:13.373851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:12:52.338 [2024-11-15 10:41:13.374051] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:52.338 [2024-11-15 10:41:13.374091] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:52.338 [2024-11-15 10:41:13.374111] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:52.338 [2024-11-15 10:41:13.376952] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:52.338 [2024-11-15 10:41:13.377006] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:52.338 spare 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.338 [2024-11-15 10:41:13.381969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:52.338 [2024-11-15 10:41:13.384323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:52.338 [2024-11-15 10:41:13.384690] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:52.338 [2024-11-15 10:41:13.384736] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:52.338 [2024-11-15 10:41:13.385047] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:52.338 [2024-11-15 10:41:13.385267] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:52.338 [2024-11-15 10:41:13.385284] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000007780 00:12:52.338 [2024-11-15 10:41:13.385474] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:12:52.338 "name": "raid_bdev1", 00:12:52.338 "uuid": "4789ab8a-d5af-4f59-affc-eccebd7f0824", 00:12:52.338 "strip_size_kb": 0, 00:12:52.338 "state": "online", 00:12:52.338 "raid_level": "raid1", 00:12:52.338 "superblock": true, 00:12:52.338 "num_base_bdevs": 2, 00:12:52.338 "num_base_bdevs_discovered": 2, 00:12:52.338 "num_base_bdevs_operational": 2, 00:12:52.338 "base_bdevs_list": [ 00:12:52.338 { 00:12:52.338 "name": "BaseBdev1", 00:12:52.338 "uuid": "e493e20a-8096-512d-b192-a64ed518bca6", 00:12:52.338 "is_configured": true, 00:12:52.338 "data_offset": 2048, 00:12:52.338 "data_size": 63488 00:12:52.338 }, 00:12:52.338 { 00:12:52.338 "name": "BaseBdev2", 00:12:52.338 "uuid": "0787a373-9690-5c22-9daf-af5f605156a7", 00:12:52.338 "is_configured": true, 00:12:52.338 "data_offset": 2048, 00:12:52.338 "data_size": 63488 00:12:52.338 } 00:12:52.338 ] 00:12:52.338 }' 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.338 10:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.912 10:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:52.912 10:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.912 10:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:52.912 10:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.912 [2024-11-15 10:41:13.902440] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:52.912 10:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.912 10:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:52.912 10:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.912 10:41:13 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.912 10:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.912 10:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:52.912 10:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.912 10:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:52.912 10:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:52.912 10:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:52.912 10:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:52.912 10:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:52.912 10:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:52.912 10:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:52.912 10:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:52.912 10:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:52.912 10:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:52.912 10:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:52.912 10:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:52.912 10:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:52.912 10:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:53.170 [2024-11-15 10:41:14.222225] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:53.170 /dev/nbd0 00:12:53.170 10:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:53.170 10:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:53.170 10:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:53.170 10:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:53.170 10:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:53.170 10:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:53.170 10:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:53.170 10:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:53.170 10:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:53.170 10:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:53.170 10:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:53.170 1+0 records in 00:12:53.170 1+0 records out 00:12:53.170 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290999 s, 14.1 MB/s 00:12:53.170 10:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.170 10:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:53.170 10:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.170 10:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:53.170 10:41:14 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:53.170 10:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:53.170 10:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:53.170 10:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:53.170 10:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:53.170 10:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:59.722 63488+0 records in 00:12:59.722 63488+0 records out 00:12:59.723 32505856 bytes (33 MB, 31 MiB) copied, 6.21833 s, 5.2 MB/s 00:12:59.723 10:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:59.723 10:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:59.723 10:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:59.723 10:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:59.723 10:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:59.723 10:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:59.723 10:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:59.723 [2024-11-15 10:41:20.787850] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.723 10:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:59.723 10:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:59.723 10:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 
00:12:59.723 10:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.723 10:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.723 10:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:59.723 10:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:59.723 10:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.723 10:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:59.723 10:41:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.723 10:41:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.723 [2024-11-15 10:41:20.821555] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:59.723 10:41:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.723 10:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:59.723 10:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.723 10:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.723 10:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.723 10:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.723 10:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:59.723 10:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.723 10:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.723 10:41:20 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.723 10:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.723 10:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.723 10:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.723 10:41:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.723 10:41:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.723 10:41:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.723 10:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.723 "name": "raid_bdev1", 00:12:59.723 "uuid": "4789ab8a-d5af-4f59-affc-eccebd7f0824", 00:12:59.723 "strip_size_kb": 0, 00:12:59.723 "state": "online", 00:12:59.723 "raid_level": "raid1", 00:12:59.723 "superblock": true, 00:12:59.723 "num_base_bdevs": 2, 00:12:59.723 "num_base_bdevs_discovered": 1, 00:12:59.723 "num_base_bdevs_operational": 1, 00:12:59.723 "base_bdevs_list": [ 00:12:59.723 { 00:12:59.723 "name": null, 00:12:59.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.723 "is_configured": false, 00:12:59.723 "data_offset": 0, 00:12:59.723 "data_size": 63488 00:12:59.723 }, 00:12:59.723 { 00:12:59.723 "name": "BaseBdev2", 00:12:59.723 "uuid": "0787a373-9690-5c22-9daf-af5f605156a7", 00:12:59.723 "is_configured": true, 00:12:59.723 "data_offset": 2048, 00:12:59.723 "data_size": 63488 00:12:59.723 } 00:12:59.723 ] 00:12:59.723 }' 00:12:59.723 10:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.723 10:41:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.289 10:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 
00:13:00.289 10:41:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.289 10:41:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.289 [2024-11-15 10:41:21.321715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:00.289 [2024-11-15 10:41:21.338356] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:13:00.289 10:41:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.289 10:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:00.289 [2024-11-15 10:41:21.340857] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:01.290 10:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:01.290 10:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:01.290 10:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:01.290 10:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:01.290 10:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:01.290 10:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.290 10:41:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.290 10:41:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.290 10:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.290 10:41:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.290 10:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:13:01.290 "name": "raid_bdev1", 00:13:01.290 "uuid": "4789ab8a-d5af-4f59-affc-eccebd7f0824", 00:13:01.290 "strip_size_kb": 0, 00:13:01.290 "state": "online", 00:13:01.290 "raid_level": "raid1", 00:13:01.290 "superblock": true, 00:13:01.290 "num_base_bdevs": 2, 00:13:01.290 "num_base_bdevs_discovered": 2, 00:13:01.290 "num_base_bdevs_operational": 2, 00:13:01.290 "process": { 00:13:01.290 "type": "rebuild", 00:13:01.290 "target": "spare", 00:13:01.290 "progress": { 00:13:01.290 "blocks": 20480, 00:13:01.290 "percent": 32 00:13:01.290 } 00:13:01.290 }, 00:13:01.290 "base_bdevs_list": [ 00:13:01.290 { 00:13:01.290 "name": "spare", 00:13:01.290 "uuid": "06b12bb6-2f85-5993-b3d3-516a50e8bf96", 00:13:01.290 "is_configured": true, 00:13:01.290 "data_offset": 2048, 00:13:01.290 "data_size": 63488 00:13:01.290 }, 00:13:01.290 { 00:13:01.290 "name": "BaseBdev2", 00:13:01.290 "uuid": "0787a373-9690-5c22-9daf-af5f605156a7", 00:13:01.290 "is_configured": true, 00:13:01.290 "data_offset": 2048, 00:13:01.290 "data_size": 63488 00:13:01.290 } 00:13:01.290 ] 00:13:01.290 }' 00:13:01.290 10:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:01.549 10:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:01.549 10:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:01.549 10:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:01.549 10:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:01.549 10:41:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.549 10:41:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.549 [2024-11-15 10:41:22.506097] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:01.549 [2024-11-15 
10:41:22.549287] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:01.549 [2024-11-15 10:41:22.549591] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:01.549 [2024-11-15 10:41:22.549737] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:01.549 [2024-11-15 10:41:22.549797] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:01.549 10:41:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.549 10:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:01.549 10:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.549 10:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.549 10:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:01.549 10:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:01.549 10:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:01.549 10:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.549 10:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.549 10:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.549 10:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.549 10:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.549 10:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.549 10:41:22 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.549 10:41:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.549 10:41:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.549 10:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.549 "name": "raid_bdev1", 00:13:01.549 "uuid": "4789ab8a-d5af-4f59-affc-eccebd7f0824", 00:13:01.549 "strip_size_kb": 0, 00:13:01.549 "state": "online", 00:13:01.549 "raid_level": "raid1", 00:13:01.549 "superblock": true, 00:13:01.549 "num_base_bdevs": 2, 00:13:01.549 "num_base_bdevs_discovered": 1, 00:13:01.549 "num_base_bdevs_operational": 1, 00:13:01.549 "base_bdevs_list": [ 00:13:01.549 { 00:13:01.549 "name": null, 00:13:01.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.549 "is_configured": false, 00:13:01.549 "data_offset": 0, 00:13:01.549 "data_size": 63488 00:13:01.549 }, 00:13:01.549 { 00:13:01.549 "name": "BaseBdev2", 00:13:01.549 "uuid": "0787a373-9690-5c22-9daf-af5f605156a7", 00:13:01.549 "is_configured": true, 00:13:01.549 "data_offset": 2048, 00:13:01.549 "data_size": 63488 00:13:01.549 } 00:13:01.549 ] 00:13:01.549 }' 00:13:01.549 10:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.549 10:41:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.115 10:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:02.115 10:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.115 10:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:02.115 10:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:02.115 10:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:13:02.115 10:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.115 10:41:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.115 10:41:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.115 10:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.115 10:41:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.115 10:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.115 "name": "raid_bdev1", 00:13:02.115 "uuid": "4789ab8a-d5af-4f59-affc-eccebd7f0824", 00:13:02.115 "strip_size_kb": 0, 00:13:02.115 "state": "online", 00:13:02.115 "raid_level": "raid1", 00:13:02.115 "superblock": true, 00:13:02.115 "num_base_bdevs": 2, 00:13:02.115 "num_base_bdevs_discovered": 1, 00:13:02.115 "num_base_bdevs_operational": 1, 00:13:02.115 "base_bdevs_list": [ 00:13:02.115 { 00:13:02.115 "name": null, 00:13:02.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.115 "is_configured": false, 00:13:02.115 "data_offset": 0, 00:13:02.115 "data_size": 63488 00:13:02.115 }, 00:13:02.115 { 00:13:02.115 "name": "BaseBdev2", 00:13:02.115 "uuid": "0787a373-9690-5c22-9daf-af5f605156a7", 00:13:02.115 "is_configured": true, 00:13:02.115 "data_offset": 2048, 00:13:02.115 "data_size": 63488 00:13:02.116 } 00:13:02.116 ] 00:13:02.116 }' 00:13:02.116 10:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.116 10:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:02.116 10:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.116 10:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:02.116 10:41:23 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:02.116 10:41:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.116 10:41:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.116 [2024-11-15 10:41:23.250111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:02.116 [2024-11-15 10:41:23.265920] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:13:02.116 10:41:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.116 10:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:02.116 [2024-11-15 10:41:23.268351] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:03.491 10:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:03.491 10:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:03.491 10:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:03.491 10:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:03.491 10:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:03.491 10:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.491 10:41:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.491 10:41:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.491 10:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.491 10:41:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:03.491 10:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:03.491 "name": "raid_bdev1", 00:13:03.491 "uuid": "4789ab8a-d5af-4f59-affc-eccebd7f0824", 00:13:03.491 "strip_size_kb": 0, 00:13:03.491 "state": "online", 00:13:03.491 "raid_level": "raid1", 00:13:03.491 "superblock": true, 00:13:03.491 "num_base_bdevs": 2, 00:13:03.491 "num_base_bdevs_discovered": 2, 00:13:03.491 "num_base_bdevs_operational": 2, 00:13:03.491 "process": { 00:13:03.491 "type": "rebuild", 00:13:03.491 "target": "spare", 00:13:03.491 "progress": { 00:13:03.491 "blocks": 20480, 00:13:03.491 "percent": 32 00:13:03.491 } 00:13:03.491 }, 00:13:03.491 "base_bdevs_list": [ 00:13:03.491 { 00:13:03.491 "name": "spare", 00:13:03.491 "uuid": "06b12bb6-2f85-5993-b3d3-516a50e8bf96", 00:13:03.491 "is_configured": true, 00:13:03.491 "data_offset": 2048, 00:13:03.491 "data_size": 63488 00:13:03.491 }, 00:13:03.491 { 00:13:03.491 "name": "BaseBdev2", 00:13:03.491 "uuid": "0787a373-9690-5c22-9daf-af5f605156a7", 00:13:03.491 "is_configured": true, 00:13:03.491 "data_offset": 2048, 00:13:03.491 "data_size": 63488 00:13:03.491 } 00:13:03.491 ] 00:13:03.491 }' 00:13:03.491 10:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:03.491 10:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:03.491 10:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:03.491 10:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:03.491 10:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:03.491 10:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:03.491 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:03.491 10:41:24 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:03.491 10:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:03.491 10:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:03.491 10:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=413 00:13:03.491 10:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:03.491 10:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:03.492 10:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:03.492 10:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:03.492 10:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:03.492 10:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:03.492 10:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.492 10:41:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.492 10:41:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.492 10:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.492 10:41:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.492 10:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:03.492 "name": "raid_bdev1", 00:13:03.492 "uuid": "4789ab8a-d5af-4f59-affc-eccebd7f0824", 00:13:03.492 "strip_size_kb": 0, 00:13:03.492 "state": "online", 00:13:03.492 "raid_level": "raid1", 00:13:03.492 "superblock": true, 00:13:03.492 "num_base_bdevs": 2, 00:13:03.492 
"num_base_bdevs_discovered": 2, 00:13:03.492 "num_base_bdevs_operational": 2, 00:13:03.492 "process": { 00:13:03.492 "type": "rebuild", 00:13:03.492 "target": "spare", 00:13:03.492 "progress": { 00:13:03.492 "blocks": 24576, 00:13:03.492 "percent": 38 00:13:03.492 } 00:13:03.492 }, 00:13:03.492 "base_bdevs_list": [ 00:13:03.492 { 00:13:03.492 "name": "spare", 00:13:03.492 "uuid": "06b12bb6-2f85-5993-b3d3-516a50e8bf96", 00:13:03.492 "is_configured": true, 00:13:03.492 "data_offset": 2048, 00:13:03.492 "data_size": 63488 00:13:03.492 }, 00:13:03.492 { 00:13:03.492 "name": "BaseBdev2", 00:13:03.492 "uuid": "0787a373-9690-5c22-9daf-af5f605156a7", 00:13:03.492 "is_configured": true, 00:13:03.492 "data_offset": 2048, 00:13:03.492 "data_size": 63488 00:13:03.492 } 00:13:03.492 ] 00:13:03.492 }' 00:13:03.492 10:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:03.492 10:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:03.492 10:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:03.492 10:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:03.492 10:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:04.882 10:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:04.882 10:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:04.882 10:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:04.882 10:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:04.882 10:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:04.882 10:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:13:04.882 10:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.882 10:41:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.882 10:41:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.882 10:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.882 10:41:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.882 10:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:04.882 "name": "raid_bdev1", 00:13:04.882 "uuid": "4789ab8a-d5af-4f59-affc-eccebd7f0824", 00:13:04.882 "strip_size_kb": 0, 00:13:04.882 "state": "online", 00:13:04.882 "raid_level": "raid1", 00:13:04.882 "superblock": true, 00:13:04.882 "num_base_bdevs": 2, 00:13:04.882 "num_base_bdevs_discovered": 2, 00:13:04.882 "num_base_bdevs_operational": 2, 00:13:04.882 "process": { 00:13:04.882 "type": "rebuild", 00:13:04.882 "target": "spare", 00:13:04.882 "progress": { 00:13:04.882 "blocks": 47104, 00:13:04.882 "percent": 74 00:13:04.882 } 00:13:04.882 }, 00:13:04.882 "base_bdevs_list": [ 00:13:04.882 { 00:13:04.882 "name": "spare", 00:13:04.882 "uuid": "06b12bb6-2f85-5993-b3d3-516a50e8bf96", 00:13:04.882 "is_configured": true, 00:13:04.882 "data_offset": 2048, 00:13:04.882 "data_size": 63488 00:13:04.882 }, 00:13:04.882 { 00:13:04.882 "name": "BaseBdev2", 00:13:04.882 "uuid": "0787a373-9690-5c22-9daf-af5f605156a7", 00:13:04.882 "is_configured": true, 00:13:04.882 "data_offset": 2048, 00:13:04.882 "data_size": 63488 00:13:04.882 } 00:13:04.882 ] 00:13:04.882 }' 00:13:04.882 10:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:04.882 10:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:04.882 10:41:25 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.882 10:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:04.882 10:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:05.450 [2024-11-15 10:41:26.390225] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:05.450 [2024-11-15 10:41:26.390348] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:05.450 [2024-11-15 10:41:26.390566] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:05.709 10:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:05.709 10:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:05.709 10:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.709 10:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:05.709 10:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:05.709 10:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.709 10:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.709 10:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.709 10:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.709 10:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.709 10:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.709 10:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:13:05.709 "name": "raid_bdev1", 00:13:05.709 "uuid": "4789ab8a-d5af-4f59-affc-eccebd7f0824", 00:13:05.709 "strip_size_kb": 0, 00:13:05.709 "state": "online", 00:13:05.709 "raid_level": "raid1", 00:13:05.709 "superblock": true, 00:13:05.709 "num_base_bdevs": 2, 00:13:05.709 "num_base_bdevs_discovered": 2, 00:13:05.709 "num_base_bdevs_operational": 2, 00:13:05.709 "base_bdevs_list": [ 00:13:05.709 { 00:13:05.709 "name": "spare", 00:13:05.709 "uuid": "06b12bb6-2f85-5993-b3d3-516a50e8bf96", 00:13:05.709 "is_configured": true, 00:13:05.709 "data_offset": 2048, 00:13:05.709 "data_size": 63488 00:13:05.709 }, 00:13:05.709 { 00:13:05.709 "name": "BaseBdev2", 00:13:05.709 "uuid": "0787a373-9690-5c22-9daf-af5f605156a7", 00:13:05.709 "is_configured": true, 00:13:05.709 "data_offset": 2048, 00:13:05.709 "data_size": 63488 00:13:05.709 } 00:13:05.709 ] 00:13:05.709 }' 00:13:05.709 10:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.968 10:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:05.968 10:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.968 10:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:05.968 10:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:05.968 10:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:05.968 10:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.968 10:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:05.968 10:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:05.968 10:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.968 10:41:26 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.968 10:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.968 10:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.968 10:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.968 10:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.968 10:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.968 "name": "raid_bdev1", 00:13:05.968 "uuid": "4789ab8a-d5af-4f59-affc-eccebd7f0824", 00:13:05.968 "strip_size_kb": 0, 00:13:05.968 "state": "online", 00:13:05.968 "raid_level": "raid1", 00:13:05.968 "superblock": true, 00:13:05.968 "num_base_bdevs": 2, 00:13:05.968 "num_base_bdevs_discovered": 2, 00:13:05.968 "num_base_bdevs_operational": 2, 00:13:05.968 "base_bdevs_list": [ 00:13:05.968 { 00:13:05.968 "name": "spare", 00:13:05.968 "uuid": "06b12bb6-2f85-5993-b3d3-516a50e8bf96", 00:13:05.968 "is_configured": true, 00:13:05.968 "data_offset": 2048, 00:13:05.968 "data_size": 63488 00:13:05.968 }, 00:13:05.968 { 00:13:05.968 "name": "BaseBdev2", 00:13:05.968 "uuid": "0787a373-9690-5c22-9daf-af5f605156a7", 00:13:05.968 "is_configured": true, 00:13:05.968 "data_offset": 2048, 00:13:05.968 "data_size": 63488 00:13:05.968 } 00:13:05.968 ] 00:13:05.968 }' 00:13:05.968 10:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.968 10:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:05.968 10:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.968 10:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:05.968 10:41:27 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:05.968 10:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:05.968 10:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:05.968 10:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:05.968 10:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:05.968 10:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:05.968 10:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.968 10:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.968 10:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.968 10:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.968 10:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.968 10:41:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.968 10:41:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.968 10:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.968 10:41:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.227 10:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.227 "name": "raid_bdev1", 00:13:06.227 "uuid": "4789ab8a-d5af-4f59-affc-eccebd7f0824", 00:13:06.227 "strip_size_kb": 0, 00:13:06.227 "state": "online", 00:13:06.227 "raid_level": "raid1", 00:13:06.227 "superblock": true, 00:13:06.227 "num_base_bdevs": 2, 00:13:06.227 
"num_base_bdevs_discovered": 2, 00:13:06.227 "num_base_bdevs_operational": 2, 00:13:06.227 "base_bdevs_list": [ 00:13:06.227 { 00:13:06.227 "name": "spare", 00:13:06.227 "uuid": "06b12bb6-2f85-5993-b3d3-516a50e8bf96", 00:13:06.227 "is_configured": true, 00:13:06.227 "data_offset": 2048, 00:13:06.227 "data_size": 63488 00:13:06.227 }, 00:13:06.227 { 00:13:06.227 "name": "BaseBdev2", 00:13:06.227 "uuid": "0787a373-9690-5c22-9daf-af5f605156a7", 00:13:06.227 "is_configured": true, 00:13:06.227 "data_offset": 2048, 00:13:06.227 "data_size": 63488 00:13:06.227 } 00:13:06.227 ] 00:13:06.227 }' 00:13:06.227 10:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.227 10:41:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.486 10:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:06.486 10:41:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.486 10:41:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.486 [2024-11-15 10:41:27.629834] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:06.486 [2024-11-15 10:41:27.630002] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:06.486 [2024-11-15 10:41:27.630209] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:06.486 [2024-11-15 10:41:27.630427] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:06.486 [2024-11-15 10:41:27.630592] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:06.486 10:41:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.486 10:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:13:06.486 10:41:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.486 10:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:06.486 10:41:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.745 10:41:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.745 10:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:06.745 10:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:06.745 10:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:06.745 10:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:06.745 10:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:06.745 10:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:06.745 10:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:06.745 10:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:06.745 10:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:06.745 10:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:06.745 10:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:06.745 10:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:06.745 10:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:07.004 /dev/nbd0 00:13:07.004 10:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd0 00:13:07.004 10:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:07.004 10:41:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:07.004 10:41:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:07.004 10:41:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:07.004 10:41:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:07.004 10:41:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:07.004 10:41:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:07.004 10:41:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:07.004 10:41:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:07.004 10:41:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:07.004 1+0 records in 00:13:07.004 1+0 records out 00:13:07.004 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000337563 s, 12.1 MB/s 00:13:07.004 10:41:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.004 10:41:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:07.004 10:41:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.004 10:41:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:07.004 10:41:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:07.004 10:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:07.004 10:41:27 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:07.004 10:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:07.358 /dev/nbd1 00:13:07.358 10:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:07.358 10:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:07.358 10:41:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:07.358 10:41:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:07.358 10:41:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:07.358 10:41:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:07.358 10:41:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:07.358 10:41:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:07.358 10:41:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:07.358 10:41:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:07.358 10:41:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:07.358 1+0 records in 00:13:07.358 1+0 records out 00:13:07.358 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352936 s, 11.6 MB/s 00:13:07.358 10:41:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.358 10:41:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:07.358 10:41:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.358 10:41:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:07.358 10:41:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:07.358 10:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:07.358 10:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:07.358 10:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:07.358 10:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:07.358 10:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:07.358 10:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:07.358 10:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:07.358 10:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:07.358 10:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:07.358 10:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:07.663 10:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:07.663 10:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:07.663 10:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:07.663 10:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:07.663 10:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:07.663 10:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd0 /proc/partitions 00:13:07.663 10:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:07.663 10:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:07.663 10:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:07.663 10:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:07.923 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:07.923 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:07.923 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:07.923 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:07.923 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:07.923 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:07.923 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:07.923 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:07.923 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:07.923 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:07.923 10:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.923 10:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.923 10:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.923 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:07.923 10:41:29 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.923 10:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.923 [2024-11-15 10:41:29.051141] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:07.923 [2024-11-15 10:41:29.051223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:07.923 [2024-11-15 10:41:29.051272] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:07.923 [2024-11-15 10:41:29.051296] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:07.923 [2024-11-15 10:41:29.054981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:07.923 [2024-11-15 10:41:29.055040] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:07.923 [2024-11-15 10:41:29.055190] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:07.923 [2024-11-15 10:41:29.055281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:07.923 [2024-11-15 10:41:29.055604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:07.923 spare 00:13:07.923 10:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.923 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:07.923 10:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.923 10:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.182 [2024-11-15 10:41:29.155772] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:08.182 [2024-11-15 10:41:29.155830] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:08.182 [2024-11-15 
10:41:29.156220] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:13:08.182 [2024-11-15 10:41:29.156469] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:08.182 [2024-11-15 10:41:29.156486] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:08.182 [2024-11-15 10:41:29.156753] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:08.182 10:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.182 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:08.183 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:08.183 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:08.183 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:08.183 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:08.183 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:08.183 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.183 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.183 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.183 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.183 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.183 10:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.183 10:41:29 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:08.183 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.183 10:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.183 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.183 "name": "raid_bdev1", 00:13:08.183 "uuid": "4789ab8a-d5af-4f59-affc-eccebd7f0824", 00:13:08.183 "strip_size_kb": 0, 00:13:08.183 "state": "online", 00:13:08.183 "raid_level": "raid1", 00:13:08.183 "superblock": true, 00:13:08.183 "num_base_bdevs": 2, 00:13:08.183 "num_base_bdevs_discovered": 2, 00:13:08.183 "num_base_bdevs_operational": 2, 00:13:08.183 "base_bdevs_list": [ 00:13:08.183 { 00:13:08.183 "name": "spare", 00:13:08.183 "uuid": "06b12bb6-2f85-5993-b3d3-516a50e8bf96", 00:13:08.183 "is_configured": true, 00:13:08.183 "data_offset": 2048, 00:13:08.183 "data_size": 63488 00:13:08.183 }, 00:13:08.183 { 00:13:08.183 "name": "BaseBdev2", 00:13:08.183 "uuid": "0787a373-9690-5c22-9daf-af5f605156a7", 00:13:08.183 "is_configured": true, 00:13:08.183 "data_offset": 2048, 00:13:08.183 "data_size": 63488 00:13:08.183 } 00:13:08.183 ] 00:13:08.183 }' 00:13:08.183 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.183 10:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.751 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:08.751 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.751 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:08.751 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:08.751 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:13:08.751 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.751 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.751 10:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.751 10:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.751 10:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.751 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.751 "name": "raid_bdev1", 00:13:08.751 "uuid": "4789ab8a-d5af-4f59-affc-eccebd7f0824", 00:13:08.751 "strip_size_kb": 0, 00:13:08.751 "state": "online", 00:13:08.751 "raid_level": "raid1", 00:13:08.751 "superblock": true, 00:13:08.751 "num_base_bdevs": 2, 00:13:08.751 "num_base_bdevs_discovered": 2, 00:13:08.751 "num_base_bdevs_operational": 2, 00:13:08.751 "base_bdevs_list": [ 00:13:08.751 { 00:13:08.751 "name": "spare", 00:13:08.751 "uuid": "06b12bb6-2f85-5993-b3d3-516a50e8bf96", 00:13:08.751 "is_configured": true, 00:13:08.751 "data_offset": 2048, 00:13:08.751 "data_size": 63488 00:13:08.751 }, 00:13:08.751 { 00:13:08.751 "name": "BaseBdev2", 00:13:08.751 "uuid": "0787a373-9690-5c22-9daf-af5f605156a7", 00:13:08.751 "is_configured": true, 00:13:08.751 "data_offset": 2048, 00:13:08.751 "data_size": 63488 00:13:08.751 } 00:13:08.751 ] 00:13:08.751 }' 00:13:08.751 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.751 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:08.751 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.751 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:08.751 10:41:29 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.751 10:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.751 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:08.751 10:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.751 10:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.751 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:08.751 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:08.751 10:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.751 10:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.751 [2024-11-15 10:41:29.879521] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:08.751 10:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.751 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:08.751 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:08.751 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:08.751 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:08.751 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:08.751 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:08.751 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.751 10:41:29 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.751 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.751 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.751 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.751 10:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.751 10:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.751 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.751 10:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.010 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.010 "name": "raid_bdev1", 00:13:09.010 "uuid": "4789ab8a-d5af-4f59-affc-eccebd7f0824", 00:13:09.010 "strip_size_kb": 0, 00:13:09.010 "state": "online", 00:13:09.010 "raid_level": "raid1", 00:13:09.010 "superblock": true, 00:13:09.010 "num_base_bdevs": 2, 00:13:09.010 "num_base_bdevs_discovered": 1, 00:13:09.010 "num_base_bdevs_operational": 1, 00:13:09.010 "base_bdevs_list": [ 00:13:09.010 { 00:13:09.010 "name": null, 00:13:09.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.010 "is_configured": false, 00:13:09.010 "data_offset": 0, 00:13:09.010 "data_size": 63488 00:13:09.010 }, 00:13:09.010 { 00:13:09.010 "name": "BaseBdev2", 00:13:09.010 "uuid": "0787a373-9690-5c22-9daf-af5f605156a7", 00:13:09.010 "is_configured": true, 00:13:09.010 "data_offset": 2048, 00:13:09.010 "data_size": 63488 00:13:09.010 } 00:13:09.010 ] 00:13:09.010 }' 00:13:09.010 10:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.010 10:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:13:09.268 10:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:09.268 10:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.268 10:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.268 [2024-11-15 10:41:30.419707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:09.268 [2024-11-15 10:41:30.419950] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:09.268 [2024-11-15 10:41:30.419977] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:09.268 [2024-11-15 10:41:30.420027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:09.527 [2024-11-15 10:41:30.435495] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:13:09.527 10:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.527 10:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:09.527 [2024-11-15 10:41:30.437976] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:10.461 10:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:10.461 10:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:10.461 10:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:10.461 10:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:10.461 10:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:10.461 10:41:31 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.461 10:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.461 10:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.461 10:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.461 10:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.461 10:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:10.461 "name": "raid_bdev1", 00:13:10.461 "uuid": "4789ab8a-d5af-4f59-affc-eccebd7f0824", 00:13:10.461 "strip_size_kb": 0, 00:13:10.461 "state": "online", 00:13:10.461 "raid_level": "raid1", 00:13:10.461 "superblock": true, 00:13:10.461 "num_base_bdevs": 2, 00:13:10.461 "num_base_bdevs_discovered": 2, 00:13:10.461 "num_base_bdevs_operational": 2, 00:13:10.461 "process": { 00:13:10.461 "type": "rebuild", 00:13:10.461 "target": "spare", 00:13:10.461 "progress": { 00:13:10.461 "blocks": 20480, 00:13:10.461 "percent": 32 00:13:10.461 } 00:13:10.461 }, 00:13:10.461 "base_bdevs_list": [ 00:13:10.461 { 00:13:10.461 "name": "spare", 00:13:10.461 "uuid": "06b12bb6-2f85-5993-b3d3-516a50e8bf96", 00:13:10.461 "is_configured": true, 00:13:10.461 "data_offset": 2048, 00:13:10.461 "data_size": 63488 00:13:10.461 }, 00:13:10.461 { 00:13:10.461 "name": "BaseBdev2", 00:13:10.461 "uuid": "0787a373-9690-5c22-9daf-af5f605156a7", 00:13:10.461 "is_configured": true, 00:13:10.461 "data_offset": 2048, 00:13:10.461 "data_size": 63488 00:13:10.461 } 00:13:10.461 ] 00:13:10.461 }' 00:13:10.461 10:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:10.461 10:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:10.461 10:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:13:10.461 10:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:10.461 10:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:10.461 10:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.461 10:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.461 [2024-11-15 10:41:31.599465] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:10.719 [2024-11-15 10:41:31.646648] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:10.719 [2024-11-15 10:41:31.646777] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:10.719 [2024-11-15 10:41:31.646802] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:10.719 [2024-11-15 10:41:31.646817] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:10.719 10:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.719 10:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:10.719 10:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:10.719 10:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:10.719 10:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:10.719 10:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:10.719 10:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:10.719 10:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.719 
10:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.719 10:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.719 10:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.719 10:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.719 10:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.719 10:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.719 10:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.719 10:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.719 10:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.719 "name": "raid_bdev1", 00:13:10.719 "uuid": "4789ab8a-d5af-4f59-affc-eccebd7f0824", 00:13:10.719 "strip_size_kb": 0, 00:13:10.719 "state": "online", 00:13:10.719 "raid_level": "raid1", 00:13:10.719 "superblock": true, 00:13:10.719 "num_base_bdevs": 2, 00:13:10.719 "num_base_bdevs_discovered": 1, 00:13:10.719 "num_base_bdevs_operational": 1, 00:13:10.719 "base_bdevs_list": [ 00:13:10.719 { 00:13:10.719 "name": null, 00:13:10.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.719 "is_configured": false, 00:13:10.719 "data_offset": 0, 00:13:10.720 "data_size": 63488 00:13:10.720 }, 00:13:10.720 { 00:13:10.720 "name": "BaseBdev2", 00:13:10.720 "uuid": "0787a373-9690-5c22-9daf-af5f605156a7", 00:13:10.720 "is_configured": true, 00:13:10.720 "data_offset": 2048, 00:13:10.720 "data_size": 63488 00:13:10.720 } 00:13:10.720 ] 00:13:10.720 }' 00:13:10.720 10:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.720 10:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:13:11.286 10:41:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:11.286 10:41:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.286 10:41:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.286 [2024-11-15 10:41:32.243412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:11.286 [2024-11-15 10:41:32.243547] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.286 [2024-11-15 10:41:32.243581] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:11.286 [2024-11-15 10:41:32.243616] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.286 [2024-11-15 10:41:32.244222] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.286 [2024-11-15 10:41:32.244276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:11.286 [2024-11-15 10:41:32.244412] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:11.286 [2024-11-15 10:41:32.244437] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:11.286 [2024-11-15 10:41:32.244451] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:11.286 [2024-11-15 10:41:32.244521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:11.286 [2024-11-15 10:41:32.260039] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:11.286 spare 00:13:11.286 10:41:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.286 10:41:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:11.286 [2024-11-15 10:41:32.262546] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:12.222 10:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:12.222 10:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.222 10:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:12.222 10:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:12.222 10:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.222 10:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.222 10:41:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.222 10:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.222 10:41:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.222 10:41:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.222 10:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.222 "name": "raid_bdev1", 00:13:12.222 "uuid": "4789ab8a-d5af-4f59-affc-eccebd7f0824", 00:13:12.222 "strip_size_kb": 0, 00:13:12.222 "state": "online", 00:13:12.222 
"raid_level": "raid1", 00:13:12.222 "superblock": true, 00:13:12.222 "num_base_bdevs": 2, 00:13:12.222 "num_base_bdevs_discovered": 2, 00:13:12.222 "num_base_bdevs_operational": 2, 00:13:12.222 "process": { 00:13:12.222 "type": "rebuild", 00:13:12.222 "target": "spare", 00:13:12.222 "progress": { 00:13:12.222 "blocks": 20480, 00:13:12.222 "percent": 32 00:13:12.222 } 00:13:12.222 }, 00:13:12.222 "base_bdevs_list": [ 00:13:12.222 { 00:13:12.222 "name": "spare", 00:13:12.222 "uuid": "06b12bb6-2f85-5993-b3d3-516a50e8bf96", 00:13:12.222 "is_configured": true, 00:13:12.222 "data_offset": 2048, 00:13:12.222 "data_size": 63488 00:13:12.222 }, 00:13:12.222 { 00:13:12.222 "name": "BaseBdev2", 00:13:12.222 "uuid": "0787a373-9690-5c22-9daf-af5f605156a7", 00:13:12.222 "is_configured": true, 00:13:12.222 "data_offset": 2048, 00:13:12.222 "data_size": 63488 00:13:12.222 } 00:13:12.222 ] 00:13:12.222 }' 00:13:12.222 10:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:12.222 10:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:12.222 10:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:12.521 10:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:12.521 10:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:12.521 10:41:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.521 10:41:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.521 [2024-11-15 10:41:33.416182] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:12.521 [2024-11-15 10:41:33.471385] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:12.521 [2024-11-15 10:41:33.471661] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:12.521 [2024-11-15 10:41:33.471800] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:12.521 [2024-11-15 10:41:33.471851] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:12.521 10:41:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.521 10:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:12.521 10:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:12.521 10:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:12.521 10:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:12.521 10:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:12.521 10:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:12.521 10:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.521 10:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.521 10:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.521 10:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.521 10:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.521 10:41:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.521 10:41:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.521 10:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.521 10:41:33 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.521 10:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.521 "name": "raid_bdev1", 00:13:12.521 "uuid": "4789ab8a-d5af-4f59-affc-eccebd7f0824", 00:13:12.521 "strip_size_kb": 0, 00:13:12.521 "state": "online", 00:13:12.521 "raid_level": "raid1", 00:13:12.521 "superblock": true, 00:13:12.521 "num_base_bdevs": 2, 00:13:12.521 "num_base_bdevs_discovered": 1, 00:13:12.521 "num_base_bdevs_operational": 1, 00:13:12.521 "base_bdevs_list": [ 00:13:12.521 { 00:13:12.521 "name": null, 00:13:12.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.521 "is_configured": false, 00:13:12.521 "data_offset": 0, 00:13:12.521 "data_size": 63488 00:13:12.521 }, 00:13:12.521 { 00:13:12.521 "name": "BaseBdev2", 00:13:12.521 "uuid": "0787a373-9690-5c22-9daf-af5f605156a7", 00:13:12.521 "is_configured": true, 00:13:12.521 "data_offset": 2048, 00:13:12.521 "data_size": 63488 00:13:12.521 } 00:13:12.521 ] 00:13:12.521 }' 00:13:12.521 10:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.521 10:41:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.103 10:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:13.103 10:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:13.103 10:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:13.103 10:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:13.103 10:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:13.103 10:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.103 10:41:34 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.103 10:41:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.103 10:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.103 10:41:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.103 10:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:13.103 "name": "raid_bdev1", 00:13:13.103 "uuid": "4789ab8a-d5af-4f59-affc-eccebd7f0824", 00:13:13.103 "strip_size_kb": 0, 00:13:13.103 "state": "online", 00:13:13.103 "raid_level": "raid1", 00:13:13.103 "superblock": true, 00:13:13.103 "num_base_bdevs": 2, 00:13:13.103 "num_base_bdevs_discovered": 1, 00:13:13.103 "num_base_bdevs_operational": 1, 00:13:13.103 "base_bdevs_list": [ 00:13:13.103 { 00:13:13.103 "name": null, 00:13:13.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.103 "is_configured": false, 00:13:13.103 "data_offset": 0, 00:13:13.103 "data_size": 63488 00:13:13.103 }, 00:13:13.103 { 00:13:13.103 "name": "BaseBdev2", 00:13:13.103 "uuid": "0787a373-9690-5c22-9daf-af5f605156a7", 00:13:13.103 "is_configured": true, 00:13:13.103 "data_offset": 2048, 00:13:13.103 "data_size": 63488 00:13:13.103 } 00:13:13.103 ] 00:13:13.103 }' 00:13:13.103 10:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:13.103 10:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:13.103 10:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:13.103 10:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:13.103 10:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:13.103 10:41:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:13:13.103 10:41:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.103 10:41:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.103 10:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:13.103 10:41:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.103 10:41:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.103 [2024-11-15 10:41:34.140948] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:13.103 [2024-11-15 10:41:34.141013] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.103 [2024-11-15 10:41:34.141048] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:13.103 [2024-11-15 10:41:34.141098] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.103 [2024-11-15 10:41:34.141683] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.103 [2024-11-15 10:41:34.141733] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:13.103 [2024-11-15 10:41:34.141838] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:13.103 [2024-11-15 10:41:34.141867] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:13.103 [2024-11-15 10:41:34.141883] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:13.103 [2024-11-15 10:41:34.141896] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:13.103 BaseBdev1 00:13:13.103 10:41:34 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.103 10:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:14.039 10:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:14.039 10:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:14.039 10:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:14.039 10:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.039 10:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.039 10:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:14.039 10:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.039 10:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.039 10:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.039 10:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.039 10:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.040 10:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.040 10:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.040 10:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.040 10:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.298 10:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.298 "name": "raid_bdev1", 00:13:14.298 "uuid": "4789ab8a-d5af-4f59-affc-eccebd7f0824", 00:13:14.298 
"strip_size_kb": 0, 00:13:14.298 "state": "online", 00:13:14.298 "raid_level": "raid1", 00:13:14.298 "superblock": true, 00:13:14.298 "num_base_bdevs": 2, 00:13:14.298 "num_base_bdevs_discovered": 1, 00:13:14.298 "num_base_bdevs_operational": 1, 00:13:14.298 "base_bdevs_list": [ 00:13:14.298 { 00:13:14.298 "name": null, 00:13:14.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.298 "is_configured": false, 00:13:14.298 "data_offset": 0, 00:13:14.298 "data_size": 63488 00:13:14.298 }, 00:13:14.298 { 00:13:14.298 "name": "BaseBdev2", 00:13:14.298 "uuid": "0787a373-9690-5c22-9daf-af5f605156a7", 00:13:14.298 "is_configured": true, 00:13:14.298 "data_offset": 2048, 00:13:14.298 "data_size": 63488 00:13:14.298 } 00:13:14.298 ] 00:13:14.298 }' 00:13:14.298 10:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.298 10:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.556 10:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:14.556 10:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:14.556 10:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:14.556 10:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:14.556 10:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:14.556 10:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.556 10:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.556 10:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.556 10:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.556 10:41:35 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.556 10:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:14.556 "name": "raid_bdev1", 00:13:14.556 "uuid": "4789ab8a-d5af-4f59-affc-eccebd7f0824", 00:13:14.556 "strip_size_kb": 0, 00:13:14.556 "state": "online", 00:13:14.556 "raid_level": "raid1", 00:13:14.556 "superblock": true, 00:13:14.556 "num_base_bdevs": 2, 00:13:14.556 "num_base_bdevs_discovered": 1, 00:13:14.556 "num_base_bdevs_operational": 1, 00:13:14.556 "base_bdevs_list": [ 00:13:14.556 { 00:13:14.556 "name": null, 00:13:14.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.556 "is_configured": false, 00:13:14.556 "data_offset": 0, 00:13:14.556 "data_size": 63488 00:13:14.556 }, 00:13:14.556 { 00:13:14.556 "name": "BaseBdev2", 00:13:14.556 "uuid": "0787a373-9690-5c22-9daf-af5f605156a7", 00:13:14.556 "is_configured": true, 00:13:14.556 "data_offset": 2048, 00:13:14.556 "data_size": 63488 00:13:14.556 } 00:13:14.556 ] 00:13:14.556 }' 00:13:14.556 10:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:14.814 10:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:14.814 10:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.814 10:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:14.814 10:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:14.814 10:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:13:14.814 10:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:14.814 10:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local 
arg=rpc_cmd 00:13:14.814 10:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:14.814 10:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:14.814 10:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:14.814 10:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:14.814 10:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.814 10:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.814 [2024-11-15 10:41:35.825424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:14.814 [2024-11-15 10:41:35.825760] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:14.814 [2024-11-15 10:41:35.825944] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:14.814 request: 00:13:14.814 { 00:13:14.814 "base_bdev": "BaseBdev1", 00:13:14.814 "raid_bdev": "raid_bdev1", 00:13:14.814 "method": "bdev_raid_add_base_bdev", 00:13:14.814 "req_id": 1 00:13:14.814 } 00:13:14.814 Got JSON-RPC error response 00:13:14.814 response: 00:13:14.814 { 00:13:14.814 "code": -22, 00:13:14.814 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:14.814 } 00:13:14.814 10:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:14.814 10:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:13:14.814 10:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:14.814 10:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:14.814 10:41:35 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:14.814 10:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:15.746 10:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:15.746 10:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:15.746 10:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:15.746 10:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:15.746 10:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:15.746 10:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:15.746 10:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.746 10:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.746 10:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.746 10:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.746 10:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.746 10:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.746 10:41:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.746 10:41:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.746 10:41:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.746 10:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.746 "name": "raid_bdev1", 00:13:15.746 "uuid": 
"4789ab8a-d5af-4f59-affc-eccebd7f0824", 00:13:15.746 "strip_size_kb": 0, 00:13:15.746 "state": "online", 00:13:15.746 "raid_level": "raid1", 00:13:15.746 "superblock": true, 00:13:15.746 "num_base_bdevs": 2, 00:13:15.746 "num_base_bdevs_discovered": 1, 00:13:15.746 "num_base_bdevs_operational": 1, 00:13:15.746 "base_bdevs_list": [ 00:13:15.746 { 00:13:15.746 "name": null, 00:13:15.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.746 "is_configured": false, 00:13:15.746 "data_offset": 0, 00:13:15.746 "data_size": 63488 00:13:15.746 }, 00:13:15.746 { 00:13:15.746 "name": "BaseBdev2", 00:13:15.746 "uuid": "0787a373-9690-5c22-9daf-af5f605156a7", 00:13:15.746 "is_configured": true, 00:13:15.746 "data_offset": 2048, 00:13:15.746 "data_size": 63488 00:13:15.746 } 00:13:15.746 ] 00:13:15.746 }' 00:13:15.746 10:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.746 10:41:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.312 10:41:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:16.312 10:41:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:16.312 10:41:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:16.312 10:41:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:16.312 10:41:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:16.312 10:41:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.312 10:41:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.312 10:41:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.312 10:41:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:13:16.312 10:41:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.312 10:41:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:16.312 "name": "raid_bdev1", 00:13:16.312 "uuid": "4789ab8a-d5af-4f59-affc-eccebd7f0824", 00:13:16.312 "strip_size_kb": 0, 00:13:16.312 "state": "online", 00:13:16.312 "raid_level": "raid1", 00:13:16.312 "superblock": true, 00:13:16.312 "num_base_bdevs": 2, 00:13:16.312 "num_base_bdevs_discovered": 1, 00:13:16.312 "num_base_bdevs_operational": 1, 00:13:16.312 "base_bdevs_list": [ 00:13:16.312 { 00:13:16.312 "name": null, 00:13:16.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.312 "is_configured": false, 00:13:16.312 "data_offset": 0, 00:13:16.312 "data_size": 63488 00:13:16.312 }, 00:13:16.312 { 00:13:16.312 "name": "BaseBdev2", 00:13:16.312 "uuid": "0787a373-9690-5c22-9daf-af5f605156a7", 00:13:16.312 "is_configured": true, 00:13:16.312 "data_offset": 2048, 00:13:16.312 "data_size": 63488 00:13:16.312 } 00:13:16.312 ] 00:13:16.312 }' 00:13:16.312 10:41:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:16.312 10:41:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:16.312 10:41:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:16.570 10:41:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:16.570 10:41:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75889 00:13:16.570 10:41:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75889 ']' 00:13:16.570 10:41:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75889 00:13:16.570 10:41:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:16.570 10:41:37 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:16.570 10:41:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75889 00:13:16.570 killing process with pid 75889 00:13:16.570 Received shutdown signal, test time was about 60.000000 seconds 00:13:16.570 00:13:16.570 Latency(us) 00:13:16.570 [2024-11-15T10:41:37.732Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:16.570 [2024-11-15T10:41:37.732Z] =================================================================================================================== 00:13:16.570 [2024-11-15T10:41:37.732Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:16.570 10:41:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:16.570 10:41:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:16.570 10:41:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75889' 00:13:16.570 10:41:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75889 00:13:16.570 [2024-11-15 10:41:37.525561] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:16.570 10:41:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75889 00:13:16.570 [2024-11-15 10:41:37.525722] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:16.570 [2024-11-15 10:41:37.525806] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:16.570 [2024-11-15 10:41:37.525828] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:16.828 [2024-11-15 10:41:37.788463] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:17.764 10:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 
00:13:17.764 00:13:17.764 real 0m26.677s 00:13:17.764 user 0m32.792s 00:13:17.764 sys 0m3.986s 00:13:17.764 10:41:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:17.764 10:41:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.764 ************************************ 00:13:17.764 END TEST raid_rebuild_test_sb 00:13:17.764 ************************************ 00:13:17.764 10:41:38 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:13:17.764 10:41:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:17.764 10:41:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:17.764 10:41:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:17.764 ************************************ 00:13:17.764 START TEST raid_rebuild_test_io 00:13:17.764 ************************************ 00:13:17.764 10:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:13:17.764 10:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:17.764 10:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:17.764 10:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:17.764 10:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:17.764 10:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:17.764 10:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:17.764 10:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:17.764 10:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:17.764 10:41:38 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:17.764 10:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:17.764 10:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:17.764 10:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:17.765 10:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:17.765 10:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:17.765 10:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:17.765 10:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:17.765 10:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:17.765 10:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:17.765 10:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:17.765 10:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:17.765 10:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:17.765 10:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:17.765 10:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:17.765 10:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76654 00:13:17.765 10:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76654 00:13:17.765 10:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:17.765 10:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 
76654 ']' 00:13:17.765 10:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.765 10:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:17.765 10:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.765 10:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:17.765 10:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.023 [2024-11-15 10:41:39.001786] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:13:18.023 [2024-11-15 10:41:39.002190] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76654 ] 00:13:18.023 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:18.023 Zero copy mechanism will not be used. 
00:13:18.281 [2024-11-15 10:41:39.182624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.281 [2024-11-15 10:41:39.314678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.539 [2024-11-15 10:41:39.524347] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:18.539 [2024-11-15 10:41:39.524626] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:19.106 10:41:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:19.106 10:41:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:13:19.106 10:41:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:19.106 10:41:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:19.106 10:41:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.106 10:41:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.106 BaseBdev1_malloc 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.106 [2024-11-15 10:41:40.011228] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:19.106 [2024-11-15 10:41:40.011441] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.106 [2024-11-15 10:41:40.011484] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:19.106 [2024-11-15 
10:41:40.011529] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.106 [2024-11-15 10:41:40.014341] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.106 [2024-11-15 10:41:40.014407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:19.106 BaseBdev1 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.106 BaseBdev2_malloc 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.106 [2024-11-15 10:41:40.067965] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:19.106 [2024-11-15 10:41:40.068171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.106 [2024-11-15 10:41:40.068209] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:19.106 [2024-11-15 10:41:40.068230] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.106 [2024-11-15 10:41:40.070998] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:13:19.106 [2024-11-15 10:41:40.071048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:19.106 BaseBdev2 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.106 spare_malloc 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.106 spare_delay 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.106 [2024-11-15 10:41:40.139918] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:19.106 [2024-11-15 10:41:40.140024] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.106 [2024-11-15 10:41:40.140054] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:19.106 [2024-11-15 10:41:40.140072] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.106 [2024-11-15 10:41:40.142916] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.106 [2024-11-15 10:41:40.142967] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:19.106 spare 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.106 [2024-11-15 10:41:40.147980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:19.106 [2024-11-15 10:41:40.150509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:19.106 [2024-11-15 10:41:40.150648] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:19.106 [2024-11-15 10:41:40.150671] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:19.106 [2024-11-15 10:41:40.151003] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:19.106 [2024-11-15 10:41:40.151212] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:19.106 [2024-11-15 10:41:40.151232] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:19.106 [2024-11-15 10:41:40.151424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.106 10:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.106 "name": "raid_bdev1", 00:13:19.106 "uuid": "9e960082-b4ce-4933-8a51-b45a75c63f32", 00:13:19.106 "strip_size_kb": 0, 00:13:19.106 "state": "online", 00:13:19.106 "raid_level": "raid1", 00:13:19.106 "superblock": false, 00:13:19.106 "num_base_bdevs": 2, 00:13:19.106 
"num_base_bdevs_discovered": 2, 00:13:19.106 "num_base_bdevs_operational": 2, 00:13:19.106 "base_bdevs_list": [ 00:13:19.106 { 00:13:19.106 "name": "BaseBdev1", 00:13:19.106 "uuid": "52952271-8559-54c5-957f-63a46cd72b7a", 00:13:19.107 "is_configured": true, 00:13:19.107 "data_offset": 0, 00:13:19.107 "data_size": 65536 00:13:19.107 }, 00:13:19.107 { 00:13:19.107 "name": "BaseBdev2", 00:13:19.107 "uuid": "f09ee44a-f54d-5f08-a4f0-3a36e7f797fc", 00:13:19.107 "is_configured": true, 00:13:19.107 "data_offset": 0, 00:13:19.107 "data_size": 65536 00:13:19.107 } 00:13:19.107 ] 00:13:19.107 }' 00:13:19.107 10:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.107 10:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.672 10:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:19.672 10:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:19.672 10:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.672 10:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.672 [2024-11-15 10:41:40.640619] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:19.672 10:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.672 10:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:19.672 10:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.672 10:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.672 10:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:19.672 10:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:13:19.672 10:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.672 10:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:19.672 10:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:19.672 10:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:19.672 10:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:19.672 10:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.672 10:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.672 [2024-11-15 10:41:40.748231] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:19.672 10:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.672 10:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:19.672 10:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:19.672 10:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:19.672 10:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:19.672 10:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:19.672 10:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:19.672 10:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.672 10:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.672 10:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:19.672 10:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.672 10:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.672 10:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.672 10:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.672 10:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.672 10:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.672 10:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.672 "name": "raid_bdev1", 00:13:19.672 "uuid": "9e960082-b4ce-4933-8a51-b45a75c63f32", 00:13:19.672 "strip_size_kb": 0, 00:13:19.672 "state": "online", 00:13:19.672 "raid_level": "raid1", 00:13:19.672 "superblock": false, 00:13:19.672 "num_base_bdevs": 2, 00:13:19.672 "num_base_bdevs_discovered": 1, 00:13:19.672 "num_base_bdevs_operational": 1, 00:13:19.672 "base_bdevs_list": [ 00:13:19.672 { 00:13:19.672 "name": null, 00:13:19.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.672 "is_configured": false, 00:13:19.672 "data_offset": 0, 00:13:19.672 "data_size": 65536 00:13:19.672 }, 00:13:19.672 { 00:13:19.672 "name": "BaseBdev2", 00:13:19.672 "uuid": "f09ee44a-f54d-5f08-a4f0-3a36e7f797fc", 00:13:19.672 "is_configured": true, 00:13:19.672 "data_offset": 0, 00:13:19.672 "data_size": 65536 00:13:19.672 } 00:13:19.672 ] 00:13:19.672 }' 00:13:19.672 10:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.672 10:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.930 [2024-11-15 10:41:40.856369] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:19.930 I/O size of 3145728 is greater 
than zero copy threshold (65536). 00:13:19.930 Zero copy mechanism will not be used. 00:13:19.930 Running I/O for 60 seconds... 00:13:20.188 10:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:20.188 10:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.188 10:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.188 [2024-11-15 10:41:41.247913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:20.188 10:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.188 10:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:20.188 [2024-11-15 10:41:41.322807] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:20.188 [2024-11-15 10:41:41.325338] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:20.445 [2024-11-15 10:41:41.442009] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:20.445 [2024-11-15 10:41:41.576976] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:20.445 [2024-11-15 10:41:41.577569] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:20.704 [2024-11-15 10:41:41.825099] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:20.962 192.00 IOPS, 576.00 MiB/s [2024-11-15T10:41:42.124Z] [2024-11-15 10:41:41.943172] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:20.962 [2024-11-15 10:41:41.943576] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:21.223 10:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:21.223 10:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.223 10:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:21.223 10:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:21.223 10:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.223 10:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.223 10:41:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.223 10:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.223 10:41:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.223 [2024-11-15 10:41:42.308252] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:21.223 10:41:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.223 10:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.223 "name": "raid_bdev1", 00:13:21.223 "uuid": "9e960082-b4ce-4933-8a51-b45a75c63f32", 00:13:21.223 "strip_size_kb": 0, 00:13:21.223 "state": "online", 00:13:21.223 "raid_level": "raid1", 00:13:21.223 "superblock": false, 00:13:21.223 "num_base_bdevs": 2, 00:13:21.223 "num_base_bdevs_discovered": 2, 00:13:21.223 "num_base_bdevs_operational": 2, 00:13:21.223 "process": { 00:13:21.223 "type": "rebuild", 00:13:21.223 "target": "spare", 00:13:21.223 "progress": { 00:13:21.223 "blocks": 14336, 00:13:21.223 "percent": 21 00:13:21.223 } 00:13:21.223 }, 
00:13:21.223 "base_bdevs_list": [ 00:13:21.223 { 00:13:21.223 "name": "spare", 00:13:21.223 "uuid": "bf8a7bad-1b56-53cc-85f5-560ab9afad66", 00:13:21.223 "is_configured": true, 00:13:21.223 "data_offset": 0, 00:13:21.223 "data_size": 65536 00:13:21.223 }, 00:13:21.223 { 00:13:21.223 "name": "BaseBdev2", 00:13:21.223 "uuid": "f09ee44a-f54d-5f08-a4f0-3a36e7f797fc", 00:13:21.223 "is_configured": true, 00:13:21.223 "data_offset": 0, 00:13:21.223 "data_size": 65536 00:13:21.223 } 00:13:21.223 ] 00:13:21.223 }' 00:13:21.223 10:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.482 10:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:21.482 10:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.482 10:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:21.482 10:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:21.482 10:41:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.482 10:41:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.482 [2024-11-15 10:41:42.452696] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:21.482 [2024-11-15 10:41:42.528246] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:21.482 [2024-11-15 10:41:42.528801] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:21.482 [2024-11-15 10:41:42.538019] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:21.482 [2024-11-15 10:41:42.548086] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:13:21.482 [2024-11-15 10:41:42.548279] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:21.482 [2024-11-15 10:41:42.548334] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:21.482 [2024-11-15 10:41:42.591801] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:21.482 10:41:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.482 10:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:21.482 10:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:21.482 10:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:21.482 10:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:21.482 10:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:21.482 10:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:21.482 10:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.482 10:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.482 10:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.482 10:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.482 10:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.482 10:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.482 10:41:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.482 10:41:42 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:13:21.740 10:41:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.741 10:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.741 "name": "raid_bdev1", 00:13:21.741 "uuid": "9e960082-b4ce-4933-8a51-b45a75c63f32", 00:13:21.741 "strip_size_kb": 0, 00:13:21.741 "state": "online", 00:13:21.741 "raid_level": "raid1", 00:13:21.741 "superblock": false, 00:13:21.741 "num_base_bdevs": 2, 00:13:21.741 "num_base_bdevs_discovered": 1, 00:13:21.741 "num_base_bdevs_operational": 1, 00:13:21.741 "base_bdevs_list": [ 00:13:21.741 { 00:13:21.741 "name": null, 00:13:21.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.741 "is_configured": false, 00:13:21.741 "data_offset": 0, 00:13:21.741 "data_size": 65536 00:13:21.741 }, 00:13:21.741 { 00:13:21.741 "name": "BaseBdev2", 00:13:21.741 "uuid": "f09ee44a-f54d-5f08-a4f0-3a36e7f797fc", 00:13:21.741 "is_configured": true, 00:13:21.741 "data_offset": 0, 00:13:21.741 "data_size": 65536 00:13:21.741 } 00:13:21.741 ] 00:13:21.741 }' 00:13:21.741 10:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.741 10:41:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.000 146.00 IOPS, 438.00 MiB/s [2024-11-15T10:41:43.162Z] 10:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:22.000 10:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.000 10:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:22.000 10:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:22.000 10:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.000 10:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.000 10:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.000 10:41:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.000 10:41:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.259 10:41:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.259 10:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.259 "name": "raid_bdev1", 00:13:22.259 "uuid": "9e960082-b4ce-4933-8a51-b45a75c63f32", 00:13:22.259 "strip_size_kb": 0, 00:13:22.259 "state": "online", 00:13:22.259 "raid_level": "raid1", 00:13:22.259 "superblock": false, 00:13:22.259 "num_base_bdevs": 2, 00:13:22.259 "num_base_bdevs_discovered": 1, 00:13:22.259 "num_base_bdevs_operational": 1, 00:13:22.259 "base_bdevs_list": [ 00:13:22.259 { 00:13:22.259 "name": null, 00:13:22.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.259 "is_configured": false, 00:13:22.259 "data_offset": 0, 00:13:22.259 "data_size": 65536 00:13:22.259 }, 00:13:22.259 { 00:13:22.259 "name": "BaseBdev2", 00:13:22.259 "uuid": "f09ee44a-f54d-5f08-a4f0-3a36e7f797fc", 00:13:22.259 "is_configured": true, 00:13:22.259 "data_offset": 0, 00:13:22.259 "data_size": 65536 00:13:22.259 } 00:13:22.259 ] 00:13:22.259 }' 00:13:22.259 10:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.259 10:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:22.259 10:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.259 10:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:22.259 10:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev 
raid_bdev1 spare 00:13:22.259 10:41:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.259 10:41:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.259 [2024-11-15 10:41:43.304114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:22.259 10:41:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.259 10:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:22.259 [2024-11-15 10:41:43.355571] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:22.259 [2024-11-15 10:41:43.358195] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:22.518 [2024-11-15 10:41:43.468056] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:22.518 [2024-11-15 10:41:43.468726] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:22.776 [2024-11-15 10:41:43.695002] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:22.776 [2024-11-15 10:41:43.695635] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:23.034 155.00 IOPS, 465.00 MiB/s [2024-11-15T10:41:44.196Z] [2024-11-15 10:41:44.024389] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:23.034 [2024-11-15 10:41:44.025204] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:23.291 [2024-11-15 10:41:44.263548] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:23.291 10:41:44 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:23.291 10:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.291 10:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:23.291 10:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:23.291 10:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.291 10:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.291 10:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.291 10:41:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.291 10:41:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.291 10:41:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.291 10:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.291 "name": "raid_bdev1", 00:13:23.291 "uuid": "9e960082-b4ce-4933-8a51-b45a75c63f32", 00:13:23.291 "strip_size_kb": 0, 00:13:23.291 "state": "online", 00:13:23.291 "raid_level": "raid1", 00:13:23.291 "superblock": false, 00:13:23.291 "num_base_bdevs": 2, 00:13:23.291 "num_base_bdevs_discovered": 2, 00:13:23.291 "num_base_bdevs_operational": 2, 00:13:23.291 "process": { 00:13:23.291 "type": "rebuild", 00:13:23.291 "target": "spare", 00:13:23.291 "progress": { 00:13:23.291 "blocks": 10240, 00:13:23.291 "percent": 15 00:13:23.291 } 00:13:23.291 }, 00:13:23.291 "base_bdevs_list": [ 00:13:23.291 { 00:13:23.291 "name": "spare", 00:13:23.291 "uuid": "bf8a7bad-1b56-53cc-85f5-560ab9afad66", 00:13:23.291 "is_configured": true, 00:13:23.291 "data_offset": 0, 00:13:23.291 "data_size": 65536 
00:13:23.291 }, 00:13:23.291 { 00:13:23.291 "name": "BaseBdev2", 00:13:23.291 "uuid": "f09ee44a-f54d-5f08-a4f0-3a36e7f797fc", 00:13:23.291 "is_configured": true, 00:13:23.291 "data_offset": 0, 00:13:23.291 "data_size": 65536 00:13:23.291 } 00:13:23.291 ] 00:13:23.291 }' 00:13:23.291 10:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.291 10:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:23.291 10:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.548 10:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:23.548 10:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:23.548 10:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:23.548 10:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:23.548 10:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:23.548 10:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=433 00:13:23.548 10:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:23.548 10:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:23.548 10:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.548 10:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:23.548 10:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:23.548 10:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.548 [2024-11-15 10:41:44.494660] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:23.548 10:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.548 10:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.548 [2024-11-15 10:41:44.495462] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:23.548 10:41:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.548 10:41:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.548 10:41:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.548 10:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.548 "name": "raid_bdev1", 00:13:23.548 "uuid": "9e960082-b4ce-4933-8a51-b45a75c63f32", 00:13:23.548 "strip_size_kb": 0, 00:13:23.548 "state": "online", 00:13:23.548 "raid_level": "raid1", 00:13:23.548 "superblock": false, 00:13:23.548 "num_base_bdevs": 2, 00:13:23.548 "num_base_bdevs_discovered": 2, 00:13:23.548 "num_base_bdevs_operational": 2, 00:13:23.548 "process": { 00:13:23.548 "type": "rebuild", 00:13:23.548 "target": "spare", 00:13:23.548 "progress": { 00:13:23.548 "blocks": 14336, 00:13:23.548 "percent": 21 00:13:23.548 } 00:13:23.548 }, 00:13:23.548 "base_bdevs_list": [ 00:13:23.548 { 00:13:23.548 "name": "spare", 00:13:23.548 "uuid": "bf8a7bad-1b56-53cc-85f5-560ab9afad66", 00:13:23.548 "is_configured": true, 00:13:23.548 "data_offset": 0, 00:13:23.548 "data_size": 65536 00:13:23.548 }, 00:13:23.548 { 00:13:23.548 "name": "BaseBdev2", 00:13:23.548 "uuid": "f09ee44a-f54d-5f08-a4f0-3a36e7f797fc", 00:13:23.548 "is_configured": true, 00:13:23.548 "data_offset": 0, 00:13:23.548 "data_size": 65536 00:13:23.548 } 00:13:23.548 ] 00:13:23.548 }' 
00:13:23.548 10:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.549 10:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:23.549 10:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.549 [2024-11-15 10:41:44.616582] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:23.549 10:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:23.549 10:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:23.806 [2024-11-15 10:41:44.844051] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:24.064 127.50 IOPS, 382.50 MiB/s [2024-11-15T10:41:45.226Z] [2024-11-15 10:41:45.063859] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:24.064 [2024-11-15 10:41:45.064514] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:24.322 [2024-11-15 10:41:45.278781] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:24.322 [2024-11-15 10:41:45.398087] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:24.580 10:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:24.580 10:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:24.580 10:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:24.580 10:41:45 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:24.580 10:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:24.580 10:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:24.580 10:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.580 10:41:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.580 10:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.580 10:41:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.580 10:41:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.580 10:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:24.580 "name": "raid_bdev1", 00:13:24.580 "uuid": "9e960082-b4ce-4933-8a51-b45a75c63f32", 00:13:24.580 "strip_size_kb": 0, 00:13:24.580 "state": "online", 00:13:24.580 "raid_level": "raid1", 00:13:24.580 "superblock": false, 00:13:24.580 "num_base_bdevs": 2, 00:13:24.580 "num_base_bdevs_discovered": 2, 00:13:24.580 "num_base_bdevs_operational": 2, 00:13:24.580 "process": { 00:13:24.580 "type": "rebuild", 00:13:24.580 "target": "spare", 00:13:24.580 "progress": { 00:13:24.580 "blocks": 30720, 00:13:24.580 "percent": 46 00:13:24.580 } 00:13:24.580 }, 00:13:24.580 "base_bdevs_list": [ 00:13:24.580 { 00:13:24.580 "name": "spare", 00:13:24.580 "uuid": "bf8a7bad-1b56-53cc-85f5-560ab9afad66", 00:13:24.580 "is_configured": true, 00:13:24.580 "data_offset": 0, 00:13:24.580 "data_size": 65536 00:13:24.580 }, 00:13:24.580 { 00:13:24.580 "name": "BaseBdev2", 00:13:24.580 "uuid": "f09ee44a-f54d-5f08-a4f0-3a36e7f797fc", 00:13:24.580 "is_configured": true, 00:13:24.580 "data_offset": 0, 00:13:24.580 "data_size": 65536 00:13:24.580 } 00:13:24.580 ] 00:13:24.580 }' 00:13:24.580 
10:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:24.580 [2024-11-15 10:41:45.721505] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:24.838 10:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:24.838 10:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.838 10:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:24.838 10:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:25.810 112.40 IOPS, 337.20 MiB/s [2024-11-15T10:41:46.972Z] 10:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:25.810 10:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:25.810 10:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.810 10:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:25.810 10:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:25.810 10:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.810 10:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.810 10:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.810 10:41:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.810 10:41:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.810 101.50 IOPS, 304.50 MiB/s [2024-11-15T10:41:46.972Z] 10:41:46 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.810 10:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.810 "name": "raid_bdev1", 00:13:25.810 "uuid": "9e960082-b4ce-4933-8a51-b45a75c63f32", 00:13:25.810 "strip_size_kb": 0, 00:13:25.810 "state": "online", 00:13:25.810 "raid_level": "raid1", 00:13:25.810 "superblock": false, 00:13:25.810 "num_base_bdevs": 2, 00:13:25.810 "num_base_bdevs_discovered": 2, 00:13:25.810 "num_base_bdevs_operational": 2, 00:13:25.810 "process": { 00:13:25.810 "type": "rebuild", 00:13:25.810 "target": "spare", 00:13:25.810 "progress": { 00:13:25.810 "blocks": 49152, 00:13:25.810 "percent": 75 00:13:25.810 } 00:13:25.810 }, 00:13:25.810 "base_bdevs_list": [ 00:13:25.810 { 00:13:25.810 "name": "spare", 00:13:25.810 "uuid": "bf8a7bad-1b56-53cc-85f5-560ab9afad66", 00:13:25.810 "is_configured": true, 00:13:25.810 "data_offset": 0, 00:13:25.810 "data_size": 65536 00:13:25.810 }, 00:13:25.810 { 00:13:25.810 "name": "BaseBdev2", 00:13:25.810 "uuid": "f09ee44a-f54d-5f08-a4f0-3a36e7f797fc", 00:13:25.810 "is_configured": true, 00:13:25.810 "data_offset": 0, 00:13:25.810 "data_size": 65536 00:13:25.810 } 00:13:25.810 ] 00:13:25.810 }' 00:13:25.810 10:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.810 10:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:25.810 10:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:26.070 10:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:26.070 10:41:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:26.070 [2024-11-15 10:41:47.167397] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:26.070 [2024-11-15 10:41:47.167975] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:26.329 [2024-11-15 10:41:47.387447] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:13:26.587 [2024-11-15 10:41:47.730996] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:26.846 [2024-11-15 10:41:47.831066] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:26.846 [2024-11-15 10:41:47.833538] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:27.105 93.14 IOPS, 279.43 MiB/s [2024-11-15T10:41:48.267Z] 10:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:27.105 10:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:27.105 10:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.105 10:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:27.105 10:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:27.105 10:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.105 10:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.105 10:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.105 10:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.105 10:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.105 10:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.105 10:41:48 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.105 "name": "raid_bdev1", 00:13:27.105 "uuid": "9e960082-b4ce-4933-8a51-b45a75c63f32", 00:13:27.105 "strip_size_kb": 0, 00:13:27.105 "state": "online", 00:13:27.105 "raid_level": "raid1", 00:13:27.105 "superblock": false, 00:13:27.105 "num_base_bdevs": 2, 00:13:27.105 "num_base_bdevs_discovered": 2, 00:13:27.105 "num_base_bdevs_operational": 2, 00:13:27.105 "base_bdevs_list": [ 00:13:27.105 { 00:13:27.105 "name": "spare", 00:13:27.105 "uuid": "bf8a7bad-1b56-53cc-85f5-560ab9afad66", 00:13:27.105 "is_configured": true, 00:13:27.106 "data_offset": 0, 00:13:27.106 "data_size": 65536 00:13:27.106 }, 00:13:27.106 { 00:13:27.106 "name": "BaseBdev2", 00:13:27.106 "uuid": "f09ee44a-f54d-5f08-a4f0-3a36e7f797fc", 00:13:27.106 "is_configured": true, 00:13:27.106 "data_offset": 0, 00:13:27.106 "data_size": 65536 00:13:27.106 } 00:13:27.106 ] 00:13:27.106 }' 00:13:27.106 10:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.106 10:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:27.106 10:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.106 10:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:27.106 10:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:27.106 10:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:27.106 10:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.106 10:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:27.106 10:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:27.106 10:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:13:27.106 10:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.106 10:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.106 10:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.106 10:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.106 10:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.106 10:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.106 "name": "raid_bdev1", 00:13:27.106 "uuid": "9e960082-b4ce-4933-8a51-b45a75c63f32", 00:13:27.106 "strip_size_kb": 0, 00:13:27.106 "state": "online", 00:13:27.106 "raid_level": "raid1", 00:13:27.106 "superblock": false, 00:13:27.106 "num_base_bdevs": 2, 00:13:27.106 "num_base_bdevs_discovered": 2, 00:13:27.106 "num_base_bdevs_operational": 2, 00:13:27.106 "base_bdevs_list": [ 00:13:27.106 { 00:13:27.106 "name": "spare", 00:13:27.106 "uuid": "bf8a7bad-1b56-53cc-85f5-560ab9afad66", 00:13:27.106 "is_configured": true, 00:13:27.106 "data_offset": 0, 00:13:27.106 "data_size": 65536 00:13:27.106 }, 00:13:27.106 { 00:13:27.106 "name": "BaseBdev2", 00:13:27.106 "uuid": "f09ee44a-f54d-5f08-a4f0-3a36e7f797fc", 00:13:27.106 "is_configured": true, 00:13:27.106 "data_offset": 0, 00:13:27.106 "data_size": 65536 00:13:27.106 } 00:13:27.106 ] 00:13:27.106 }' 00:13:27.106 10:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.364 10:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:27.364 10:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.364 10:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:27.364 10:41:48 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:27.364 10:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:27.364 10:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:27.364 10:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:27.364 10:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:27.364 10:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:27.364 10:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.364 10:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.364 10:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.364 10:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.364 10:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.364 10:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.364 10:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.364 10:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.364 10:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.364 10:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.364 "name": "raid_bdev1", 00:13:27.364 "uuid": "9e960082-b4ce-4933-8a51-b45a75c63f32", 00:13:27.364 "strip_size_kb": 0, 00:13:27.364 "state": "online", 00:13:27.364 "raid_level": "raid1", 00:13:27.364 "superblock": false, 00:13:27.364 "num_base_bdevs": 2, 
00:13:27.364 "num_base_bdevs_discovered": 2, 00:13:27.364 "num_base_bdevs_operational": 2, 00:13:27.364 "base_bdevs_list": [ 00:13:27.364 { 00:13:27.364 "name": "spare", 00:13:27.364 "uuid": "bf8a7bad-1b56-53cc-85f5-560ab9afad66", 00:13:27.364 "is_configured": true, 00:13:27.364 "data_offset": 0, 00:13:27.364 "data_size": 65536 00:13:27.364 }, 00:13:27.364 { 00:13:27.364 "name": "BaseBdev2", 00:13:27.364 "uuid": "f09ee44a-f54d-5f08-a4f0-3a36e7f797fc", 00:13:27.365 "is_configured": true, 00:13:27.365 "data_offset": 0, 00:13:27.365 "data_size": 65536 00:13:27.365 } 00:13:27.365 ] 00:13:27.365 }' 00:13:27.365 10:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.365 10:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.931 10:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:27.931 10:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.931 10:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.931 87.12 IOPS, 261.38 MiB/s [2024-11-15T10:41:49.093Z] [2024-11-15 10:41:48.863239] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:27.931 [2024-11-15 10:41:48.863275] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:27.931 00:13:27.931 Latency(us) 00:13:27.931 [2024-11-15T10:41:49.093Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:27.931 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:27.931 raid_bdev1 : 8.10 86.29 258.88 0.00 0.00 14340.71 283.00 118679.74 00:13:27.931 [2024-11-15T10:41:49.093Z] =================================================================================================================== 00:13:27.931 [2024-11-15T10:41:49.093Z] Total : 86.29 258.88 0.00 0.00 14340.71 
283.00 118679.74 00:13:27.931 [2024-11-15 10:41:48.979068] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:27.931 [2024-11-15 10:41:48.979134] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:27.931 [2024-11-15 10:41:48.979247] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:27.931 [2024-11-15 10:41:48.979265] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:27.931 { 00:13:27.931 "results": [ 00:13:27.931 { 00:13:27.931 "job": "raid_bdev1", 00:13:27.931 "core_mask": "0x1", 00:13:27.931 "workload": "randrw", 00:13:27.931 "percentage": 50, 00:13:27.931 "status": "finished", 00:13:27.931 "queue_depth": 2, 00:13:27.931 "io_size": 3145728, 00:13:27.931 "runtime": 8.100237, 00:13:27.931 "iops": 86.29377140446631, 00:13:27.931 "mibps": 258.8813142133989, 00:13:27.931 "io_failed": 0, 00:13:27.931 "io_timeout": 0, 00:13:27.931 "avg_latency_us": 14340.711811679024, 00:13:27.931 "min_latency_us": 282.99636363636364, 00:13:27.931 "max_latency_us": 118679.73818181817 00:13:27.931 } 00:13:27.931 ], 00:13:27.931 "core_count": 1 00:13:27.931 } 00:13:27.931 10:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.931 10:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.931 10:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:27.931 10:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.931 10:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.931 10:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.932 10:41:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:27.932 10:41:49 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:27.932 10:41:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:27.932 10:41:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:27.932 10:41:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:27.932 10:41:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:27.932 10:41:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:27.932 10:41:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:27.932 10:41:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:27.932 10:41:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:27.932 10:41:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:27.932 10:41:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:27.932 10:41:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:28.499 /dev/nbd0 00:13:28.499 10:41:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:28.499 10:41:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:28.499 10:41:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:28.499 10:41:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:28.499 10:41:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:28.499 10:41:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:28.499 10:41:49 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:28.499 10:41:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:28.499 10:41:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:28.499 10:41:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:28.499 10:41:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:28.499 1+0 records in 00:13:28.499 1+0 records out 00:13:28.499 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000588657 s, 7.0 MB/s 00:13:28.499 10:41:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.499 10:41:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:28.499 10:41:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.499 10:41:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:28.499 10:41:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:28.499 10:41:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:28.499 10:41:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:28.499 10:41:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:28.499 10:41:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:28.499 10:41:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:28.499 10:41:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:28.499 10:41:49 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:28.499 10:41:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:28.499 10:41:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:28.499 10:41:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:28.499 10:41:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:28.499 10:41:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:28.499 10:41:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:28.499 10:41:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:28.757 /dev/nbd1 00:13:28.757 10:41:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:28.757 10:41:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:28.757 10:41:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:28.757 10:41:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:28.757 10:41:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:28.757 10:41:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:28.757 10:41:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:28.757 10:41:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:28.757 10:41:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:28.757 10:41:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:28.757 10:41:49 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:28.757 1+0 records in 00:13:28.757 1+0 records out 00:13:28.757 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000633 s, 6.5 MB/s 00:13:28.757 10:41:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.757 10:41:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:28.757 10:41:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.757 10:41:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:28.757 10:41:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:28.757 10:41:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:28.757 10:41:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:28.757 10:41:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:29.014 10:41:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:29.014 10:41:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:29.014 10:41:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:29.014 10:41:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:29.014 10:41:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:29.014 10:41:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:29.014 10:41:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:29.273 
10:41:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:29.273 10:41:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:29.273 10:41:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:29.273 10:41:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:29.273 10:41:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:29.273 10:41:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:29.273 10:41:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:29.273 10:41:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:29.273 10:41:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:29.273 10:41:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:29.273 10:41:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:29.273 10:41:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:29.273 10:41:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:29.273 10:41:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:29.273 10:41:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:29.605 10:41:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:29.605 10:41:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:29.605 10:41:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:29.605 10:41:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # 
(( i = 1 )) 00:13:29.605 10:41:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:29.605 10:41:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:29.605 10:41:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:29.605 10:41:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:29.605 10:41:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:29.605 10:41:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76654 00:13:29.605 10:41:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76654 ']' 00:13:29.605 10:41:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76654 00:13:29.605 10:41:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:13:29.605 10:41:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:29.605 10:41:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76654 00:13:29.605 killing process with pid 76654 00:13:29.605 Received shutdown signal, test time was about 9.759637 seconds 00:13:29.605 00:13:29.605 Latency(us) 00:13:29.605 [2024-11-15T10:41:50.767Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:29.605 [2024-11-15T10:41:50.767Z] =================================================================================================================== 00:13:29.605 [2024-11-15T10:41:50.767Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:29.605 10:41:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:29.605 10:41:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:29.605 10:41:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 76654' 00:13:29.605 10:41:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76654 00:13:29.605 [2024-11-15 10:41:50.618572] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:29.605 10:41:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76654 00:13:29.869 [2024-11-15 10:41:50.822584] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:30.808 10:41:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:30.808 00:13:30.808 real 0m13.032s 00:13:30.808 user 0m17.177s 00:13:30.808 sys 0m1.379s 00:13:30.808 10:41:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:30.808 10:41:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.808 ************************************ 00:13:30.808 END TEST raid_rebuild_test_io 00:13:30.808 ************************************ 00:13:30.808 10:41:51 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:13:30.808 10:41:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:30.808 10:41:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:30.808 10:41:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:31.067 ************************************ 00:13:31.067 START TEST raid_rebuild_test_sb_io 00:13:31.067 ************************************ 00:13:31.067 10:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:13:31.067 10:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:31.067 10:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:31.067 10:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:31.067 10:41:51 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:31.067 10:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:31.067 10:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:31.067 10:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:31.067 10:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:31.067 10:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:31.067 10:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:31.067 10:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:31.067 10:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:31.067 10:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:31.067 10:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:31.067 10:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:31.067 10:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:31.067 10:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:31.067 10:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:31.067 10:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:31.067 10:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:31.067 10:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:31.067 10:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 
00:13:31.067 10:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:31.067 10:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:31.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:31.067 10:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77036 00:13:31.067 10:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77036 00:13:31.067 10:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:31.067 10:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 77036 ']' 00:13:31.067 10:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.067 10:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:31.067 10:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.067 10:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:31.067 10:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.067 [2024-11-15 10:41:52.065249] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:13:31.067 [2024-11-15 10:41:52.065599] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77036 ] 00:13:31.067 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:13:31.067 Zero copy mechanism will not be used. 00:13:31.326 [2024-11-15 10:41:52.240052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.326 [2024-11-15 10:41:52.371858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.585 [2024-11-15 10:41:52.578110] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:31.585 [2024-11-15 10:41:52.578335] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:32.152 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:32.152 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:13:32.152 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:32.152 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:32.152 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.152 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.152 BaseBdev1_malloc 00:13:32.152 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.152 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:32.152 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.152 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.152 [2024-11-15 10:41:53.153542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:32.153 [2024-11-15 10:41:53.153624] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:32.153 [2024-11-15 10:41:53.153658] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000007280 00:13:32.153 [2024-11-15 10:41:53.153678] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:32.153 [2024-11-15 10:41:53.156719] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:32.153 [2024-11-15 10:41:53.156770] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:32.153 BaseBdev1 00:13:32.153 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.153 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:32.153 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:32.153 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.153 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.153 BaseBdev2_malloc 00:13:32.153 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.153 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:32.153 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.153 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.153 [2024-11-15 10:41:53.209572] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:32.153 [2024-11-15 10:41:53.209647] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:32.153 [2024-11-15 10:41:53.209678] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:32.153 [2024-11-15 10:41:53.209699] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:13:32.153 [2024-11-15 10:41:53.212380] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:32.153 [2024-11-15 10:41:53.212428] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:32.153 BaseBdev2 00:13:32.153 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.153 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:32.153 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.153 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.153 spare_malloc 00:13:32.153 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.153 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:32.153 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.153 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.153 spare_delay 00:13:32.153 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.153 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:32.153 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.153 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.153 [2024-11-15 10:41:53.282860] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:32.153 [2024-11-15 10:41:53.282934] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:32.153 [2024-11-15 10:41:53.282965] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:32.153 [2024-11-15 10:41:53.282984] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:32.153 [2024-11-15 10:41:53.285798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:32.153 [2024-11-15 10:41:53.285849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:32.153 spare 00:13:32.153 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.153 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:32.153 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.153 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.153 [2024-11-15 10:41:53.290949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:32.153 [2024-11-15 10:41:53.293440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:32.153 [2024-11-15 10:41:53.293819] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:32.153 [2024-11-15 10:41:53.293965] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:32.153 [2024-11-15 10:41:53.294327] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:32.153 [2024-11-15 10:41:53.294681] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:32.153 [2024-11-15 10:41:53.294809] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:32.153 [2024-11-15 10:41:53.295146] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:32.153 
10:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.153 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:32.153 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:32.153 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:32.153 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:32.153 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:32.153 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:32.153 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.153 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.153 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.153 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.153 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.153 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.153 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.153 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.412 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.412 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.412 "name": "raid_bdev1", 00:13:32.412 "uuid": "f7a21625-6079-4e3f-89b5-2c442db550ae", 
00:13:32.412 "strip_size_kb": 0, 00:13:32.412 "state": "online", 00:13:32.412 "raid_level": "raid1", 00:13:32.412 "superblock": true, 00:13:32.412 "num_base_bdevs": 2, 00:13:32.412 "num_base_bdevs_discovered": 2, 00:13:32.412 "num_base_bdevs_operational": 2, 00:13:32.412 "base_bdevs_list": [ 00:13:32.412 { 00:13:32.412 "name": "BaseBdev1", 00:13:32.412 "uuid": "bfbad912-d00d-5ee7-8d69-698a34578368", 00:13:32.412 "is_configured": true, 00:13:32.412 "data_offset": 2048, 00:13:32.412 "data_size": 63488 00:13:32.412 }, 00:13:32.412 { 00:13:32.412 "name": "BaseBdev2", 00:13:32.412 "uuid": "b1342906-4a7a-5542-9b52-857ee07198fc", 00:13:32.412 "is_configured": true, 00:13:32.412 "data_offset": 2048, 00:13:32.412 "data_size": 63488 00:13:32.412 } 00:13:32.412 ] 00:13:32.412 }' 00:13:32.412 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.412 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.670 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:32.670 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.670 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:32.670 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.670 [2024-11-15 10:41:53.799623] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:32.670 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.929 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:32.929 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.929 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r 
'.[].base_bdevs_list[0].data_offset' 00:13:32.929 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.929 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.929 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.929 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:32.929 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:32.929 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:32.929 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:32.929 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.929 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.929 [2024-11-15 10:41:53.895244] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:32.929 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.929 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:32.929 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:32.929 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:32.929 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:32.929 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:32.929 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:32.929 
10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.929 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.929 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.929 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.929 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.929 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.929 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.929 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.929 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.929 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.929 "name": "raid_bdev1", 00:13:32.929 "uuid": "f7a21625-6079-4e3f-89b5-2c442db550ae", 00:13:32.929 "strip_size_kb": 0, 00:13:32.929 "state": "online", 00:13:32.929 "raid_level": "raid1", 00:13:32.929 "superblock": true, 00:13:32.929 "num_base_bdevs": 2, 00:13:32.929 "num_base_bdevs_discovered": 1, 00:13:32.929 "num_base_bdevs_operational": 1, 00:13:32.929 "base_bdevs_list": [ 00:13:32.929 { 00:13:32.929 "name": null, 00:13:32.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.929 "is_configured": false, 00:13:32.929 "data_offset": 0, 00:13:32.929 "data_size": 63488 00:13:32.929 }, 00:13:32.929 { 00:13:32.929 "name": "BaseBdev2", 00:13:32.929 "uuid": "b1342906-4a7a-5542-9b52-857ee07198fc", 00:13:32.929 "is_configured": true, 00:13:32.929 "data_offset": 2048, 00:13:32.929 "data_size": 63488 00:13:32.929 } 00:13:32.929 ] 00:13:32.929 }' 00:13:32.929 10:41:53 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.929 10:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.929 [2024-11-15 10:41:53.999522] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:32.929 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:32.929 Zero copy mechanism will not be used. 00:13:32.929 Running I/O for 60 seconds... 00:13:33.496 10:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:33.496 10:41:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.496 10:41:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.496 [2024-11-15 10:41:54.415420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:33.496 10:41:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.496 10:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:33.496 [2024-11-15 10:41:54.500813] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:33.496 [2024-11-15 10:41:54.503479] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:33.496 [2024-11-15 10:41:54.621897] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:33.496 [2024-11-15 10:41:54.622567] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:33.754 [2024-11-15 10:41:54.757141] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:33.754 [2024-11-15 10:41:54.757480] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:34.012 200.00 IOPS, 600.00 MiB/s [2024-11-15T10:41:55.174Z] [2024-11-15 10:41:55.008276] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:34.012 [2024-11-15 10:41:55.008941] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:34.012 [2024-11-15 10:41:55.136500] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:34.012 [2024-11-15 10:41:55.137135] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:34.271 [2024-11-15 10:41:55.389673] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:34.271 [2024-11-15 10:41:55.390405] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:34.530 10:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:34.530 10:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:34.530 10:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:34.530 10:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:34.530 10:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:34.530 10:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.530 10:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.530 10:41:55 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.530 10:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.530 10:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.530 10:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:34.530 "name": "raid_bdev1", 00:13:34.530 "uuid": "f7a21625-6079-4e3f-89b5-2c442db550ae", 00:13:34.530 "strip_size_kb": 0, 00:13:34.530 "state": "online", 00:13:34.530 "raid_level": "raid1", 00:13:34.530 "superblock": true, 00:13:34.530 "num_base_bdevs": 2, 00:13:34.530 "num_base_bdevs_discovered": 2, 00:13:34.530 "num_base_bdevs_operational": 2, 00:13:34.530 "process": { 00:13:34.530 "type": "rebuild", 00:13:34.530 "target": "spare", 00:13:34.530 "progress": { 00:13:34.530 "blocks": 14336, 00:13:34.530 "percent": 22 00:13:34.530 } 00:13:34.530 }, 00:13:34.530 "base_bdevs_list": [ 00:13:34.530 { 00:13:34.530 "name": "spare", 00:13:34.530 "uuid": "c48c7ba3-30ad-5afb-bf84-3f2382be6e2b", 00:13:34.530 "is_configured": true, 00:13:34.530 "data_offset": 2048, 00:13:34.530 "data_size": 63488 00:13:34.530 }, 00:13:34.530 { 00:13:34.530 "name": "BaseBdev2", 00:13:34.530 "uuid": "b1342906-4a7a-5542-9b52-857ee07198fc", 00:13:34.530 "is_configured": true, 00:13:34.530 "data_offset": 2048, 00:13:34.530 "data_size": 63488 00:13:34.530 } 00:13:34.530 ] 00:13:34.530 }' 00:13:34.530 [2024-11-15 10:41:55.527296] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:34.530 [2024-11-15 10:41:55.527646] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:34.530 10:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:34.530 10:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d 
]] 00:13:34.530 10:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:34.530 10:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:34.530 10:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:34.530 10:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.531 10:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.531 [2024-11-15 10:41:55.646256] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:34.531 [2024-11-15 10:41:55.654541] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:34.531 [2024-11-15 10:41:55.655094] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:34.789 [2024-11-15 10:41:55.756930] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:34.789 [2024-11-15 10:41:55.775453] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:34.789 [2024-11-15 10:41:55.775495] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:34.789 [2024-11-15 10:41:55.775521] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:34.789 [2024-11-15 10:41:55.827840] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:34.789 10:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.789 10:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:34.789 10:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:13:34.789 10:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:34.789 10:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:34.789 10:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:34.789 10:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:34.789 10:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.789 10:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.789 10:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.789 10:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.789 10:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.789 10:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.789 10:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.789 10:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.789 10:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.789 10:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.789 "name": "raid_bdev1", 00:13:34.789 "uuid": "f7a21625-6079-4e3f-89b5-2c442db550ae", 00:13:34.789 "strip_size_kb": 0, 00:13:34.789 "state": "online", 00:13:34.789 "raid_level": "raid1", 00:13:34.789 "superblock": true, 00:13:34.789 "num_base_bdevs": 2, 00:13:34.789 "num_base_bdevs_discovered": 1, 00:13:34.789 "num_base_bdevs_operational": 1, 00:13:34.789 "base_bdevs_list": [ 00:13:34.789 { 00:13:34.789 "name": 
null, 00:13:34.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.789 "is_configured": false, 00:13:34.789 "data_offset": 0, 00:13:34.789 "data_size": 63488 00:13:34.789 }, 00:13:34.789 { 00:13:34.789 "name": "BaseBdev2", 00:13:34.789 "uuid": "b1342906-4a7a-5542-9b52-857ee07198fc", 00:13:34.789 "is_configured": true, 00:13:34.789 "data_offset": 2048, 00:13:34.789 "data_size": 63488 00:13:34.789 } 00:13:34.789 ] 00:13:34.789 }' 00:13:34.789 10:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.789 10:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.306 172.50 IOPS, 517.50 MiB/s [2024-11-15T10:41:56.468Z] 10:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:35.306 10:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:35.306 10:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:35.306 10:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:35.306 10:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:35.306 10:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.306 10:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.306 10:41:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.306 10:41:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.306 10:41:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.306 10:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:35.307 "name": "raid_bdev1", 00:13:35.307 "uuid": 
"f7a21625-6079-4e3f-89b5-2c442db550ae", 00:13:35.307 "strip_size_kb": 0, 00:13:35.307 "state": "online", 00:13:35.307 "raid_level": "raid1", 00:13:35.307 "superblock": true, 00:13:35.307 "num_base_bdevs": 2, 00:13:35.307 "num_base_bdevs_discovered": 1, 00:13:35.307 "num_base_bdevs_operational": 1, 00:13:35.307 "base_bdevs_list": [ 00:13:35.307 { 00:13:35.307 "name": null, 00:13:35.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.307 "is_configured": false, 00:13:35.307 "data_offset": 0, 00:13:35.307 "data_size": 63488 00:13:35.307 }, 00:13:35.307 { 00:13:35.307 "name": "BaseBdev2", 00:13:35.307 "uuid": "b1342906-4a7a-5542-9b52-857ee07198fc", 00:13:35.307 "is_configured": true, 00:13:35.307 "data_offset": 2048, 00:13:35.307 "data_size": 63488 00:13:35.307 } 00:13:35.307 ] 00:13:35.307 }' 00:13:35.307 10:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:35.565 10:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:35.565 10:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:35.565 10:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:35.565 10:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:35.565 10:41:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.565 10:41:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.565 [2024-11-15 10:41:56.557080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:35.565 10:41:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.565 10:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:35.565 [2024-11-15 10:41:56.611506] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:35.565 [2024-11-15 10:41:56.614362] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:35.823 [2024-11-15 10:41:56.724930] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:35.823 [2024-11-15 10:41:56.725597] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:35.823 [2024-11-15 10:41:56.852972] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:35.824 [2024-11-15 10:41:56.853455] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:36.081 169.00 IOPS, 507.00 MiB/s [2024-11-15T10:41:57.243Z] [2024-11-15 10:41:57.200825] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:36.339 [2024-11-15 10:41:57.344848] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:36.339 [2024-11-15 10:41:57.345110] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:36.597 10:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:36.597 10:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:36.597 10:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:36.597 10:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:36.597 10:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.597 10:41:57 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.597 10:41:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.597 10:41:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.597 10:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.597 10:41:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.597 10:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.597 "name": "raid_bdev1", 00:13:36.597 "uuid": "f7a21625-6079-4e3f-89b5-2c442db550ae", 00:13:36.597 "strip_size_kb": 0, 00:13:36.597 "state": "online", 00:13:36.597 "raid_level": "raid1", 00:13:36.597 "superblock": true, 00:13:36.597 "num_base_bdevs": 2, 00:13:36.597 "num_base_bdevs_discovered": 2, 00:13:36.597 "num_base_bdevs_operational": 2, 00:13:36.597 "process": { 00:13:36.597 "type": "rebuild", 00:13:36.597 "target": "spare", 00:13:36.597 "progress": { 00:13:36.597 "blocks": 12288, 00:13:36.597 "percent": 19 00:13:36.597 } 00:13:36.597 }, 00:13:36.597 "base_bdevs_list": [ 00:13:36.598 { 00:13:36.598 "name": "spare", 00:13:36.598 "uuid": "c48c7ba3-30ad-5afb-bf84-3f2382be6e2b", 00:13:36.598 "is_configured": true, 00:13:36.598 "data_offset": 2048, 00:13:36.598 "data_size": 63488 00:13:36.598 }, 00:13:36.598 { 00:13:36.598 "name": "BaseBdev2", 00:13:36.598 "uuid": "b1342906-4a7a-5542-9b52-857ee07198fc", 00:13:36.598 "is_configured": true, 00:13:36.598 "data_offset": 2048, 00:13:36.598 "data_size": 63488 00:13:36.598 } 00:13:36.598 ] 00:13:36.598 }' 00:13:36.598 10:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.598 [2024-11-15 10:41:57.659052] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 
18432 00:13:36.598 [2024-11-15 10:41:57.659529] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:36.598 10:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:36.598 10:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.598 10:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:36.598 10:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:36.598 10:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:36.598 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:36.598 10:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:36.598 10:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:36.598 10:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:36.598 10:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=446 00:13:36.598 10:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:36.598 10:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:36.598 10:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:36.598 10:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:36.598 10:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:36.598 10:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.598 10:41:57 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.856 10:41:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.856 10:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.856 10:41:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.856 10:41:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.856 10:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.856 "name": "raid_bdev1", 00:13:36.856 "uuid": "f7a21625-6079-4e3f-89b5-2c442db550ae", 00:13:36.856 "strip_size_kb": 0, 00:13:36.856 "state": "online", 00:13:36.856 "raid_level": "raid1", 00:13:36.856 "superblock": true, 00:13:36.856 "num_base_bdevs": 2, 00:13:36.856 "num_base_bdevs_discovered": 2, 00:13:36.856 "num_base_bdevs_operational": 2, 00:13:36.856 "process": { 00:13:36.856 "type": "rebuild", 00:13:36.856 "target": "spare", 00:13:36.856 "progress": { 00:13:36.856 "blocks": 14336, 00:13:36.856 "percent": 22 00:13:36.856 } 00:13:36.856 }, 00:13:36.856 "base_bdevs_list": [ 00:13:36.856 { 00:13:36.856 "name": "spare", 00:13:36.856 "uuid": "c48c7ba3-30ad-5afb-bf84-3f2382be6e2b", 00:13:36.856 "is_configured": true, 00:13:36.856 "data_offset": 2048, 00:13:36.856 "data_size": 63488 00:13:36.856 }, 00:13:36.856 { 00:13:36.856 "name": "BaseBdev2", 00:13:36.856 "uuid": "b1342906-4a7a-5542-9b52-857ee07198fc", 00:13:36.856 "is_configured": true, 00:13:36.856 "data_offset": 2048, 00:13:36.856 "data_size": 63488 00:13:36.856 } 00:13:36.856 ] 00:13:36.856 }' 00:13:36.856 10:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.856 10:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:36.856 10:41:57 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.856 [2024-11-15 10:41:57.871770] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:36.856 10:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:36.856 10:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:37.115 141.50 IOPS, 424.50 MiB/s [2024-11-15T10:41:58.277Z] [2024-11-15 10:41:58.219262] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:37.682 [2024-11-15 10:41:58.669973] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:37.941 10:41:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:37.941 10:41:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:37.941 10:41:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.941 10:41:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:37.941 10:41:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:37.941 10:41:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.941 10:41:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.941 10:41:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.941 10:41:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.941 10:41:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:13:37.941 [2024-11-15 10:41:58.928967] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:37.941 10:41:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.941 10:41:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.941 "name": "raid_bdev1", 00:13:37.941 "uuid": "f7a21625-6079-4e3f-89b5-2c442db550ae", 00:13:37.941 "strip_size_kb": 0, 00:13:37.941 "state": "online", 00:13:37.941 "raid_level": "raid1", 00:13:37.941 "superblock": true, 00:13:37.941 "num_base_bdevs": 2, 00:13:37.941 "num_base_bdevs_discovered": 2, 00:13:37.941 "num_base_bdevs_operational": 2, 00:13:37.941 "process": { 00:13:37.941 "type": "rebuild", 00:13:37.941 "target": "spare", 00:13:37.941 "progress": { 00:13:37.941 "blocks": 30720, 00:13:37.941 "percent": 48 00:13:37.941 } 00:13:37.941 }, 00:13:37.941 "base_bdevs_list": [ 00:13:37.941 { 00:13:37.941 "name": "spare", 00:13:37.941 "uuid": "c48c7ba3-30ad-5afb-bf84-3f2382be6e2b", 00:13:37.941 "is_configured": true, 00:13:37.941 "data_offset": 2048, 00:13:37.941 "data_size": 63488 00:13:37.941 }, 00:13:37.941 { 00:13:37.941 "name": "BaseBdev2", 00:13:37.941 "uuid": "b1342906-4a7a-5542-9b52-857ee07198fc", 00:13:37.941 "is_configured": true, 00:13:37.941 "data_offset": 2048, 00:13:37.941 "data_size": 63488 00:13:37.941 } 00:13:37.941 ] 00:13:37.941 }' 00:13:37.941 10:41:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.941 123.20 IOPS, 369.60 MiB/s [2024-11-15T10:41:59.103Z] 10:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:37.941 10:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.941 [2024-11-15 10:41:59.049050] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 
30720 offset_end: 36864 00:13:37.941 10:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:37.941 10:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:38.508 [2024-11-15 10:41:59.396837] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:38.508 [2024-11-15 10:41:59.617930] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:38.508 [2024-11-15 10:41:59.618396] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:39.074 [2024-11-15 10:41:59.941987] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:39.074 108.00 IOPS, 324.00 MiB/s [2024-11-15T10:42:00.236Z] 10:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:39.074 10:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:39.074 10:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.074 10:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:39.074 10:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:39.074 10:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.074 10:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.074 10:42:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.074 10:42:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.074 10:42:00 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.074 10:42:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.074 10:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.074 "name": "raid_bdev1", 00:13:39.074 "uuid": "f7a21625-6079-4e3f-89b5-2c442db550ae", 00:13:39.074 "strip_size_kb": 0, 00:13:39.074 "state": "online", 00:13:39.074 "raid_level": "raid1", 00:13:39.074 "superblock": true, 00:13:39.074 "num_base_bdevs": 2, 00:13:39.074 "num_base_bdevs_discovered": 2, 00:13:39.074 "num_base_bdevs_operational": 2, 00:13:39.074 "process": { 00:13:39.074 "type": "rebuild", 00:13:39.074 "target": "spare", 00:13:39.074 "progress": { 00:13:39.074 "blocks": 45056, 00:13:39.074 "percent": 70 00:13:39.074 } 00:13:39.074 }, 00:13:39.074 "base_bdevs_list": [ 00:13:39.074 { 00:13:39.074 "name": "spare", 00:13:39.074 "uuid": "c48c7ba3-30ad-5afb-bf84-3f2382be6e2b", 00:13:39.074 "is_configured": true, 00:13:39.074 "data_offset": 2048, 00:13:39.074 "data_size": 63488 00:13:39.074 }, 00:13:39.074 { 00:13:39.074 "name": "BaseBdev2", 00:13:39.074 "uuid": "b1342906-4a7a-5542-9b52-857ee07198fc", 00:13:39.074 "is_configured": true, 00:13:39.074 "data_offset": 2048, 00:13:39.074 "data_size": 63488 00:13:39.074 } 00:13:39.074 ] 00:13:39.074 }' 00:13:39.074 10:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.074 10:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:39.074 10:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.332 10:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:39.332 10:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:39.898 [2024-11-15 10:42:00.828160] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:13:39.898 98.57 IOPS, 295.71 MiB/s [2024-11-15T10:42:01.060Z] [2024-11-15 10:42:01.050014] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:40.155 [2024-11-15 10:42:01.157593] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:40.155 [2024-11-15 10:42:01.160987] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:40.155 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:40.155 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:40.155 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.155 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:40.155 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:40.155 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.155 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.155 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.155 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.155 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.155 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.413 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:40.413 "name": "raid_bdev1", 00:13:40.413 "uuid": 
"f7a21625-6079-4e3f-89b5-2c442db550ae", 00:13:40.413 "strip_size_kb": 0, 00:13:40.413 "state": "online", 00:13:40.413 "raid_level": "raid1", 00:13:40.413 "superblock": true, 00:13:40.413 "num_base_bdevs": 2, 00:13:40.413 "num_base_bdevs_discovered": 2, 00:13:40.413 "num_base_bdevs_operational": 2, 00:13:40.413 "base_bdevs_list": [ 00:13:40.413 { 00:13:40.413 "name": "spare", 00:13:40.413 "uuid": "c48c7ba3-30ad-5afb-bf84-3f2382be6e2b", 00:13:40.413 "is_configured": true, 00:13:40.413 "data_offset": 2048, 00:13:40.413 "data_size": 63488 00:13:40.413 }, 00:13:40.413 { 00:13:40.413 "name": "BaseBdev2", 00:13:40.413 "uuid": "b1342906-4a7a-5542-9b52-857ee07198fc", 00:13:40.413 "is_configured": true, 00:13:40.413 "data_offset": 2048, 00:13:40.413 "data_size": 63488 00:13:40.413 } 00:13:40.413 ] 00:13:40.413 }' 00:13:40.413 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:40.413 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:40.413 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:40.413 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:40.413 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:40.413 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:40.413 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.413 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:40.413 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:40.413 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.413 10:42:01 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.413 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.413 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.413 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.413 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.413 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:40.413 "name": "raid_bdev1", 00:13:40.413 "uuid": "f7a21625-6079-4e3f-89b5-2c442db550ae", 00:13:40.413 "strip_size_kb": 0, 00:13:40.413 "state": "online", 00:13:40.413 "raid_level": "raid1", 00:13:40.413 "superblock": true, 00:13:40.413 "num_base_bdevs": 2, 00:13:40.413 "num_base_bdevs_discovered": 2, 00:13:40.413 "num_base_bdevs_operational": 2, 00:13:40.413 "base_bdevs_list": [ 00:13:40.413 { 00:13:40.413 "name": "spare", 00:13:40.413 "uuid": "c48c7ba3-30ad-5afb-bf84-3f2382be6e2b", 00:13:40.413 "is_configured": true, 00:13:40.413 "data_offset": 2048, 00:13:40.413 "data_size": 63488 00:13:40.413 }, 00:13:40.413 { 00:13:40.413 "name": "BaseBdev2", 00:13:40.413 "uuid": "b1342906-4a7a-5542-9b52-857ee07198fc", 00:13:40.413 "is_configured": true, 00:13:40.413 "data_offset": 2048, 00:13:40.413 "data_size": 63488 00:13:40.413 } 00:13:40.413 ] 00:13:40.413 }' 00:13:40.413 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:40.413 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:40.413 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:40.671 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:40.671 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:40.671 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:40.671 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:40.671 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:40.671 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:40.671 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:40.671 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.671 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.671 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.671 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.671 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.671 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.671 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.671 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.671 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.671 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.671 "name": "raid_bdev1", 00:13:40.671 "uuid": "f7a21625-6079-4e3f-89b5-2c442db550ae", 00:13:40.671 "strip_size_kb": 0, 00:13:40.671 "state": "online", 00:13:40.671 "raid_level": "raid1", 00:13:40.671 "superblock": true, 00:13:40.671 
"num_base_bdevs": 2, 00:13:40.671 "num_base_bdevs_discovered": 2, 00:13:40.671 "num_base_bdevs_operational": 2, 00:13:40.671 "base_bdevs_list": [ 00:13:40.672 { 00:13:40.672 "name": "spare", 00:13:40.672 "uuid": "c48c7ba3-30ad-5afb-bf84-3f2382be6e2b", 00:13:40.672 "is_configured": true, 00:13:40.672 "data_offset": 2048, 00:13:40.672 "data_size": 63488 00:13:40.672 }, 00:13:40.672 { 00:13:40.672 "name": "BaseBdev2", 00:13:40.672 "uuid": "b1342906-4a7a-5542-9b52-857ee07198fc", 00:13:40.672 "is_configured": true, 00:13:40.672 "data_offset": 2048, 00:13:40.672 "data_size": 63488 00:13:40.672 } 00:13:40.672 ] 00:13:40.672 }' 00:13:40.672 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.672 10:42:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.188 89.38 IOPS, 268.12 MiB/s [2024-11-15T10:42:02.350Z] 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:41.188 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.188 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.188 [2024-11-15 10:42:02.137093] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:41.188 [2024-11-15 10:42:02.137291] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:41.188 00:13:41.188 Latency(us) 00:13:41.188 [2024-11-15T10:42:02.350Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:41.188 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:41.188 raid_bdev1 : 8.15 88.75 266.24 0.00 0.00 13917.05 271.83 119156.36 00:13:41.188 [2024-11-15T10:42:02.350Z] =================================================================================================================== 00:13:41.188 [2024-11-15T10:42:02.350Z] 
Total : 88.75 266.24 0.00 0.00 13917.05 271.83 119156.36 00:13:41.188 { 00:13:41.188 "results": [ 00:13:41.188 { 00:13:41.188 "job": "raid_bdev1", 00:13:41.188 "core_mask": "0x1", 00:13:41.188 "workload": "randrw", 00:13:41.188 "percentage": 50, 00:13:41.188 "status": "finished", 00:13:41.188 "queue_depth": 2, 00:13:41.188 "io_size": 3145728, 00:13:41.188 "runtime": 8.146662, 00:13:41.188 "iops": 88.74800501113216, 00:13:41.188 "mibps": 266.2440150333965, 00:13:41.188 "io_failed": 0, 00:13:41.188 "io_timeout": 0, 00:13:41.188 "avg_latency_us": 13917.052282157676, 00:13:41.188 "min_latency_us": 271.82545454545453, 00:13:41.188 "max_latency_us": 119156.36363636363 00:13:41.188 } 00:13:41.188 ], 00:13:41.188 "core_count": 1 00:13:41.188 } 00:13:41.188 [2024-11-15 10:42:02.169653] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.188 [2024-11-15 10:42:02.169707] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:41.188 [2024-11-15 10:42:02.169808] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:41.188 [2024-11-15 10:42:02.169830] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:41.188 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.188 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:41.188 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.188 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.188 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.188 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.188 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # 
[[ 0 == 0 ]] 00:13:41.188 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:41.188 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:41.188 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:41.188 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:41.188 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:41.188 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:41.188 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:41.188 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:41.188 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:41.188 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:41.188 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:41.188 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:41.446 /dev/nbd0 00:13:41.446 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:41.446 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:41.446 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:41.446 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:41.446 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:41.446 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:41.446 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:41.446 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:41.446 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:41.446 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:41.446 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:41.446 1+0 records in 00:13:41.446 1+0 records out 00:13:41.446 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000472524 s, 8.7 MB/s 00:13:41.446 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:41.446 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:41.446 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:41.446 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:41.446 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:41.447 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:41.447 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:41.447 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:41.447 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:41.447 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 
00:13:41.447 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:41.447 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:41.447 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:41.447 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:41.447 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:41.447 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:41.447 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:41.447 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:41.447 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:42.014 /dev/nbd1 00:13:42.014 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:42.014 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:42.014 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:42.014 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:42.014 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:42.014 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:42.014 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:42.014 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:42.014 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- 
# (( i = 1 )) 00:13:42.014 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:42.014 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:42.014 1+0 records in 00:13:42.014 1+0 records out 00:13:42.014 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000592567 s, 6.9 MB/s 00:13:42.014 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:42.014 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:42.015 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:42.015 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:42.015 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:42.015 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:42.015 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:42.015 10:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:42.015 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:42.015 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:42.015 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:42.015 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:42.015 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:42.015 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:42.015 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:42.277 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:42.277 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:42.277 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:42.277 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:42.277 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:42.277 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:42.277 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:42.277 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:42.277 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:42.277 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:42.277 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:42.277 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:42.277 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:42.277 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:42.277 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:42.844 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 
00:13:42.844 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:42.844 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:42.844 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:42.844 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:42.844 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:42.844 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:42.844 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:42.844 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:42.844 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:42.844 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.844 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.844 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.844 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:42.844 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.844 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.844 [2024-11-15 10:42:03.727891] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:42.844 [2024-11-15 10:42:03.727958] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.844 [2024-11-15 10:42:03.728003] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:42.844 
[2024-11-15 10:42:03.728023] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.844 [2024-11-15 10:42:03.731042] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.844 [2024-11-15 10:42:03.731088] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:42.844 [2024-11-15 10:42:03.731210] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:42.844 [2024-11-15 10:42:03.731274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:42.844 [2024-11-15 10:42:03.731443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:42.844 spare 00:13:42.844 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.844 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:42.844 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.844 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.844 [2024-11-15 10:42:03.831621] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:42.844 [2024-11-15 10:42:03.831652] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:42.844 [2024-11-15 10:42:03.832009] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:13:42.844 [2024-11-15 10:42:03.832208] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:42.844 [2024-11-15 10:42:03.832234] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:42.844 [2024-11-15 10:42:03.832457] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.844 10:42:03 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.844 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:42.844 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:42.844 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.844 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:42.844 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:42.844 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:42.844 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.844 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.844 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.844 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.844 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.844 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.844 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.844 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.844 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.844 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.844 "name": "raid_bdev1", 00:13:42.844 "uuid": "f7a21625-6079-4e3f-89b5-2c442db550ae", 00:13:42.844 "strip_size_kb": 0, 00:13:42.844 
"state": "online", 00:13:42.844 "raid_level": "raid1", 00:13:42.844 "superblock": true, 00:13:42.844 "num_base_bdevs": 2, 00:13:42.844 "num_base_bdevs_discovered": 2, 00:13:42.844 "num_base_bdevs_operational": 2, 00:13:42.844 "base_bdevs_list": [ 00:13:42.844 { 00:13:42.844 "name": "spare", 00:13:42.844 "uuid": "c48c7ba3-30ad-5afb-bf84-3f2382be6e2b", 00:13:42.844 "is_configured": true, 00:13:42.844 "data_offset": 2048, 00:13:42.844 "data_size": 63488 00:13:42.844 }, 00:13:42.844 { 00:13:42.844 "name": "BaseBdev2", 00:13:42.844 "uuid": "b1342906-4a7a-5542-9b52-857ee07198fc", 00:13:42.844 "is_configured": true, 00:13:42.844 "data_offset": 2048, 00:13:42.844 "data_size": 63488 00:13:42.844 } 00:13:42.844 ] 00:13:42.844 }' 00:13:42.844 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.844 10:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.412 10:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:43.412 10:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.412 10:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:43.412 10:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:43.412 10:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.412 10:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.412 10:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.412 10:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.412 10:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.412 10:42:04 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.412 10:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.412 "name": "raid_bdev1", 00:13:43.412 "uuid": "f7a21625-6079-4e3f-89b5-2c442db550ae", 00:13:43.412 "strip_size_kb": 0, 00:13:43.412 "state": "online", 00:13:43.412 "raid_level": "raid1", 00:13:43.412 "superblock": true, 00:13:43.412 "num_base_bdevs": 2, 00:13:43.412 "num_base_bdevs_discovered": 2, 00:13:43.412 "num_base_bdevs_operational": 2, 00:13:43.412 "base_bdevs_list": [ 00:13:43.412 { 00:13:43.412 "name": "spare", 00:13:43.412 "uuid": "c48c7ba3-30ad-5afb-bf84-3f2382be6e2b", 00:13:43.412 "is_configured": true, 00:13:43.412 "data_offset": 2048, 00:13:43.412 "data_size": 63488 00:13:43.412 }, 00:13:43.412 { 00:13:43.412 "name": "BaseBdev2", 00:13:43.412 "uuid": "b1342906-4a7a-5542-9b52-857ee07198fc", 00:13:43.412 "is_configured": true, 00:13:43.412 "data_offset": 2048, 00:13:43.412 "data_size": 63488 00:13:43.412 } 00:13:43.412 ] 00:13:43.412 }' 00:13:43.412 10:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.412 10:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:43.412 10:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.412 10:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:43.412 10:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.412 10:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:43.412 10:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.412 10:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.412 10:42:04 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.759 10:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:43.759 10:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:43.759 10:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.759 10:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.759 [2024-11-15 10:42:04.576851] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:43.759 10:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.759 10:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:43.759 10:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.759 10:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.759 10:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.759 10:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.759 10:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:43.759 10:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.759 10:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.759 10:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.759 10:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.759 10:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:43.759 10:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.759 10:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.759 10:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.759 10:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.759 10:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.759 "name": "raid_bdev1", 00:13:43.759 "uuid": "f7a21625-6079-4e3f-89b5-2c442db550ae", 00:13:43.759 "strip_size_kb": 0, 00:13:43.759 "state": "online", 00:13:43.759 "raid_level": "raid1", 00:13:43.759 "superblock": true, 00:13:43.759 "num_base_bdevs": 2, 00:13:43.759 "num_base_bdevs_discovered": 1, 00:13:43.759 "num_base_bdevs_operational": 1, 00:13:43.759 "base_bdevs_list": [ 00:13:43.759 { 00:13:43.759 "name": null, 00:13:43.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.759 "is_configured": false, 00:13:43.759 "data_offset": 0, 00:13:43.759 "data_size": 63488 00:13:43.759 }, 00:13:43.759 { 00:13:43.759 "name": "BaseBdev2", 00:13:43.759 "uuid": "b1342906-4a7a-5542-9b52-857ee07198fc", 00:13:43.759 "is_configured": true, 00:13:43.759 "data_offset": 2048, 00:13:43.759 "data_size": 63488 00:13:43.759 } 00:13:43.759 ] 00:13:43.759 }' 00:13:43.759 10:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.759 10:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.018 10:42:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:44.018 10:42:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.018 10:42:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.018 [2024-11-15 
10:42:05.113098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:44.018 [2024-11-15 10:42:05.113389] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:44.018 [2024-11-15 10:42:05.113411] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:44.018 [2024-11-15 10:42:05.113466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:44.018 [2024-11-15 10:42:05.129980] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:13:44.018 10:42:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.018 10:42:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:44.018 [2024-11-15 10:42:05.132733] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:45.396 10:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:45.396 10:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.396 10:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:45.396 10:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:45.396 10:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.396 10:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.396 10:42:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.396 10:42:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.396 10:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:13:45.396 10:42:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.396 10:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.396 "name": "raid_bdev1", 00:13:45.396 "uuid": "f7a21625-6079-4e3f-89b5-2c442db550ae", 00:13:45.396 "strip_size_kb": 0, 00:13:45.396 "state": "online", 00:13:45.396 "raid_level": "raid1", 00:13:45.396 "superblock": true, 00:13:45.396 "num_base_bdevs": 2, 00:13:45.396 "num_base_bdevs_discovered": 2, 00:13:45.396 "num_base_bdevs_operational": 2, 00:13:45.396 "process": { 00:13:45.396 "type": "rebuild", 00:13:45.396 "target": "spare", 00:13:45.396 "progress": { 00:13:45.396 "blocks": 20480, 00:13:45.396 "percent": 32 00:13:45.396 } 00:13:45.396 }, 00:13:45.396 "base_bdevs_list": [ 00:13:45.396 { 00:13:45.396 "name": "spare", 00:13:45.396 "uuid": "c48c7ba3-30ad-5afb-bf84-3f2382be6e2b", 00:13:45.396 "is_configured": true, 00:13:45.396 "data_offset": 2048, 00:13:45.396 "data_size": 63488 00:13:45.396 }, 00:13:45.396 { 00:13:45.396 "name": "BaseBdev2", 00:13:45.396 "uuid": "b1342906-4a7a-5542-9b52-857ee07198fc", 00:13:45.396 "is_configured": true, 00:13:45.396 "data_offset": 2048, 00:13:45.396 "data_size": 63488 00:13:45.396 } 00:13:45.396 ] 00:13:45.396 }' 00:13:45.396 10:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.396 10:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:45.396 10:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.396 10:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:45.396 10:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:45.396 10:42:06 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.396 10:42:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.396 [2024-11-15 10:42:06.294317] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:45.396 [2024-11-15 10:42:06.341355] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:45.396 [2024-11-15 10:42:06.341449] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:45.396 [2024-11-15 10:42:06.341476] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:45.396 [2024-11-15 10:42:06.341486] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:45.396 10:42:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.396 10:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:45.396 10:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:45.397 10:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.397 10:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:45.397 10:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:45.397 10:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:45.397 10:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.397 10:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.397 10:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.397 10:42:06 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.397 10:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.397 10:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.397 10:42:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.397 10:42:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.397 10:42:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.397 10:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.397 "name": "raid_bdev1", 00:13:45.397 "uuid": "f7a21625-6079-4e3f-89b5-2c442db550ae", 00:13:45.397 "strip_size_kb": 0, 00:13:45.397 "state": "online", 00:13:45.397 "raid_level": "raid1", 00:13:45.397 "superblock": true, 00:13:45.397 "num_base_bdevs": 2, 00:13:45.397 "num_base_bdevs_discovered": 1, 00:13:45.397 "num_base_bdevs_operational": 1, 00:13:45.397 "base_bdevs_list": [ 00:13:45.397 { 00:13:45.397 "name": null, 00:13:45.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.397 "is_configured": false, 00:13:45.397 "data_offset": 0, 00:13:45.397 "data_size": 63488 00:13:45.397 }, 00:13:45.397 { 00:13:45.397 "name": "BaseBdev2", 00:13:45.397 "uuid": "b1342906-4a7a-5542-9b52-857ee07198fc", 00:13:45.397 "is_configured": true, 00:13:45.397 "data_offset": 2048, 00:13:45.397 "data_size": 63488 00:13:45.397 } 00:13:45.397 ] 00:13:45.397 }' 00:13:45.397 10:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.397 10:42:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.964 10:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:45.964 10:42:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:13:45.964 10:42:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.964 [2024-11-15 10:42:06.924393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:45.964 [2024-11-15 10:42:06.924635] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.964 [2024-11-15 10:42:06.924808] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:45.964 [2024-11-15 10:42:06.924847] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.964 [2024-11-15 10:42:06.925489] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.964 [2024-11-15 10:42:06.925539] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:45.964 [2024-11-15 10:42:06.925673] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:45.964 [2024-11-15 10:42:06.925701] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:45.964 [2024-11-15 10:42:06.925727] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:45.964 [2024-11-15 10:42:06.925768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:45.964 [2024-11-15 10:42:06.941961] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:13:45.964 spare 00:13:45.964 10:42:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.964 10:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:45.964 [2024-11-15 10:42:06.944425] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:46.898 10:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:46.898 10:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:46.898 10:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:46.898 10:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:46.898 10:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:46.898 10:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.898 10:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.898 10:42:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.898 10:42:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.898 10:42:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.898 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:46.898 "name": "raid_bdev1", 00:13:46.898 "uuid": "f7a21625-6079-4e3f-89b5-2c442db550ae", 00:13:46.898 "strip_size_kb": 0, 00:13:46.898 
"state": "online", 00:13:46.898 "raid_level": "raid1", 00:13:46.898 "superblock": true, 00:13:46.898 "num_base_bdevs": 2, 00:13:46.898 "num_base_bdevs_discovered": 2, 00:13:46.898 "num_base_bdevs_operational": 2, 00:13:46.898 "process": { 00:13:46.898 "type": "rebuild", 00:13:46.898 "target": "spare", 00:13:46.898 "progress": { 00:13:46.898 "blocks": 20480, 00:13:46.898 "percent": 32 00:13:46.898 } 00:13:46.898 }, 00:13:46.898 "base_bdevs_list": [ 00:13:46.898 { 00:13:46.898 "name": "spare", 00:13:46.898 "uuid": "c48c7ba3-30ad-5afb-bf84-3f2382be6e2b", 00:13:46.898 "is_configured": true, 00:13:46.898 "data_offset": 2048, 00:13:46.898 "data_size": 63488 00:13:46.898 }, 00:13:46.898 { 00:13:46.898 "name": "BaseBdev2", 00:13:46.898 "uuid": "b1342906-4a7a-5542-9b52-857ee07198fc", 00:13:46.898 "is_configured": true, 00:13:46.898 "data_offset": 2048, 00:13:46.898 "data_size": 63488 00:13:46.898 } 00:13:46.898 ] 00:13:46.898 }' 00:13:46.898 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:46.898 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:46.898 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.156 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:47.156 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:47.156 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.156 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.156 [2024-11-15 10:42:08.106391] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:47.156 [2024-11-15 10:42:08.153469] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:13:47.156 [2024-11-15 10:42:08.153747] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.156 [2024-11-15 10:42:08.153871] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:47.156 [2024-11-15 10:42:08.153928] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:47.156 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.156 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:47.156 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.156 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.156 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:47.156 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:47.156 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:47.156 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.156 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.156 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.156 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.156 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.156 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.156 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.156 10:42:08 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.156 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.156 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.156 "name": "raid_bdev1", 00:13:47.156 "uuid": "f7a21625-6079-4e3f-89b5-2c442db550ae", 00:13:47.156 "strip_size_kb": 0, 00:13:47.156 "state": "online", 00:13:47.156 "raid_level": "raid1", 00:13:47.156 "superblock": true, 00:13:47.156 "num_base_bdevs": 2, 00:13:47.156 "num_base_bdevs_discovered": 1, 00:13:47.156 "num_base_bdevs_operational": 1, 00:13:47.156 "base_bdevs_list": [ 00:13:47.156 { 00:13:47.156 "name": null, 00:13:47.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.156 "is_configured": false, 00:13:47.156 "data_offset": 0, 00:13:47.156 "data_size": 63488 00:13:47.156 }, 00:13:47.156 { 00:13:47.156 "name": "BaseBdev2", 00:13:47.156 "uuid": "b1342906-4a7a-5542-9b52-857ee07198fc", 00:13:47.156 "is_configured": true, 00:13:47.156 "data_offset": 2048, 00:13:47.156 "data_size": 63488 00:13:47.156 } 00:13:47.156 ] 00:13:47.156 }' 00:13:47.156 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.156 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.722 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:47.722 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.722 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:47.722 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:47.722 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.722 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.722 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.722 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.722 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.722 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.722 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.722 "name": "raid_bdev1", 00:13:47.722 "uuid": "f7a21625-6079-4e3f-89b5-2c442db550ae", 00:13:47.722 "strip_size_kb": 0, 00:13:47.722 "state": "online", 00:13:47.722 "raid_level": "raid1", 00:13:47.722 "superblock": true, 00:13:47.722 "num_base_bdevs": 2, 00:13:47.722 "num_base_bdevs_discovered": 1, 00:13:47.722 "num_base_bdevs_operational": 1, 00:13:47.722 "base_bdevs_list": [ 00:13:47.722 { 00:13:47.722 "name": null, 00:13:47.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.722 "is_configured": false, 00:13:47.722 "data_offset": 0, 00:13:47.722 "data_size": 63488 00:13:47.722 }, 00:13:47.722 { 00:13:47.722 "name": "BaseBdev2", 00:13:47.722 "uuid": "b1342906-4a7a-5542-9b52-857ee07198fc", 00:13:47.722 "is_configured": true, 00:13:47.722 "data_offset": 2048, 00:13:47.722 "data_size": 63488 00:13:47.722 } 00:13:47.722 ] 00:13:47.722 }' 00:13:47.722 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.722 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:47.722 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.020 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:48.021 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:48.021 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.021 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.021 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.021 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:48.021 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.021 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.021 [2024-11-15 10:42:08.897430] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:48.021 [2024-11-15 10:42:08.897514] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:48.021 [2024-11-15 10:42:08.897547] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:48.021 [2024-11-15 10:42:08.897565] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:48.021 [2024-11-15 10:42:08.898139] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:48.021 [2024-11-15 10:42:08.898176] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:48.021 [2024-11-15 10:42:08.898272] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:48.021 [2024-11-15 10:42:08.898303] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:48.021 [2024-11-15 10:42:08.898315] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:48.021 [2024-11-15 10:42:08.898330] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:48.021 BaseBdev1 00:13:48.021 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.021 10:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:48.987 10:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:48.987 10:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:48.987 10:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.987 10:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:48.987 10:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:48.987 10:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:48.987 10:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.987 10:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.987 10:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.987 10:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.987 10:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.987 10:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.987 10:42:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.987 10:42:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.987 10:42:09 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.987 10:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.987 "name": "raid_bdev1", 00:13:48.987 "uuid": "f7a21625-6079-4e3f-89b5-2c442db550ae", 00:13:48.987 "strip_size_kb": 0, 00:13:48.987 "state": "online", 00:13:48.987 "raid_level": "raid1", 00:13:48.987 "superblock": true, 00:13:48.987 "num_base_bdevs": 2, 00:13:48.987 "num_base_bdevs_discovered": 1, 00:13:48.987 "num_base_bdevs_operational": 1, 00:13:48.987 "base_bdevs_list": [ 00:13:48.987 { 00:13:48.987 "name": null, 00:13:48.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.987 "is_configured": false, 00:13:48.987 "data_offset": 0, 00:13:48.987 "data_size": 63488 00:13:48.987 }, 00:13:48.987 { 00:13:48.987 "name": "BaseBdev2", 00:13:48.987 "uuid": "b1342906-4a7a-5542-9b52-857ee07198fc", 00:13:48.987 "is_configured": true, 00:13:48.987 "data_offset": 2048, 00:13:48.987 "data_size": 63488 00:13:48.987 } 00:13:48.987 ] 00:13:48.988 }' 00:13:48.988 10:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.988 10:42:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.555 10:42:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:49.555 10:42:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.555 10:42:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:49.555 10:42:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:49.555 10:42:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.555 10:42:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.555 10:42:10 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.555 10:42:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.555 10:42:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.555 10:42:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.555 10:42:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.555 "name": "raid_bdev1", 00:13:49.555 "uuid": "f7a21625-6079-4e3f-89b5-2c442db550ae", 00:13:49.555 "strip_size_kb": 0, 00:13:49.555 "state": "online", 00:13:49.555 "raid_level": "raid1", 00:13:49.555 "superblock": true, 00:13:49.555 "num_base_bdevs": 2, 00:13:49.555 "num_base_bdevs_discovered": 1, 00:13:49.555 "num_base_bdevs_operational": 1, 00:13:49.555 "base_bdevs_list": [ 00:13:49.555 { 00:13:49.555 "name": null, 00:13:49.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.555 "is_configured": false, 00:13:49.555 "data_offset": 0, 00:13:49.555 "data_size": 63488 00:13:49.555 }, 00:13:49.555 { 00:13:49.555 "name": "BaseBdev2", 00:13:49.555 "uuid": "b1342906-4a7a-5542-9b52-857ee07198fc", 00:13:49.555 "is_configured": true, 00:13:49.555 "data_offset": 2048, 00:13:49.555 "data_size": 63488 00:13:49.555 } 00:13:49.555 ] 00:13:49.555 }' 00:13:49.555 10:42:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.555 10:42:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:49.555 10:42:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.555 10:42:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:49.555 10:42:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:49.555 10:42:10 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@652 -- # local es=0 00:13:49.555 10:42:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:49.555 10:42:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:49.555 10:42:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:49.555 10:42:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:49.555 10:42:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:49.555 10:42:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:49.555 10:42:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.555 10:42:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.555 [2024-11-15 10:42:10.574241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:49.555 [2024-11-15 10:42:10.574456] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:49.555 [2024-11-15 10:42:10.574480] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:49.555 request: 00:13:49.555 { 00:13:49.555 "base_bdev": "BaseBdev1", 00:13:49.555 "raid_bdev": "raid_bdev1", 00:13:49.555 "method": "bdev_raid_add_base_bdev", 00:13:49.555 "req_id": 1 00:13:49.555 } 00:13:49.555 Got JSON-RPC error response 00:13:49.555 response: 00:13:49.555 { 00:13:49.555 "code": -22, 00:13:49.555 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:49.555 } 00:13:49.555 10:42:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:13:49.555 10:42:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:13:49.555 10:42:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:49.555 10:42:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:49.555 10:42:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:49.555 10:42:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:50.493 10:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:50.493 10:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:50.493 10:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:50.493 10:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:50.493 10:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:50.493 10:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:50.493 10:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.493 10:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.493 10:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.493 10:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.493 10:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.493 10:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.493 10:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:50.493 10:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.493 10:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.493 10:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.493 "name": "raid_bdev1", 00:13:50.493 "uuid": "f7a21625-6079-4e3f-89b5-2c442db550ae", 00:13:50.493 "strip_size_kb": 0, 00:13:50.493 "state": "online", 00:13:50.493 "raid_level": "raid1", 00:13:50.493 "superblock": true, 00:13:50.493 "num_base_bdevs": 2, 00:13:50.493 "num_base_bdevs_discovered": 1, 00:13:50.493 "num_base_bdevs_operational": 1, 00:13:50.493 "base_bdevs_list": [ 00:13:50.493 { 00:13:50.493 "name": null, 00:13:50.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.493 "is_configured": false, 00:13:50.493 "data_offset": 0, 00:13:50.493 "data_size": 63488 00:13:50.493 }, 00:13:50.493 { 00:13:50.493 "name": "BaseBdev2", 00:13:50.493 "uuid": "b1342906-4a7a-5542-9b52-857ee07198fc", 00:13:50.493 "is_configured": true, 00:13:50.493 "data_offset": 2048, 00:13:50.493 "data_size": 63488 00:13:50.493 } 00:13:50.493 ] 00:13:50.493 }' 00:13:50.493 10:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.493 10:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.062 10:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:51.062 10:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:51.062 10:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:51.062 10:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:51.062 10:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:51.062 10:42:12 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.062 10:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.062 10:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.062 10:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.062 10:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.062 10:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:51.062 "name": "raid_bdev1", 00:13:51.062 "uuid": "f7a21625-6079-4e3f-89b5-2c442db550ae", 00:13:51.062 "strip_size_kb": 0, 00:13:51.062 "state": "online", 00:13:51.062 "raid_level": "raid1", 00:13:51.062 "superblock": true, 00:13:51.062 "num_base_bdevs": 2, 00:13:51.062 "num_base_bdevs_discovered": 1, 00:13:51.062 "num_base_bdevs_operational": 1, 00:13:51.062 "base_bdevs_list": [ 00:13:51.062 { 00:13:51.062 "name": null, 00:13:51.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.062 "is_configured": false, 00:13:51.062 "data_offset": 0, 00:13:51.062 "data_size": 63488 00:13:51.062 }, 00:13:51.062 { 00:13:51.062 "name": "BaseBdev2", 00:13:51.062 "uuid": "b1342906-4a7a-5542-9b52-857ee07198fc", 00:13:51.062 "is_configured": true, 00:13:51.062 "data_offset": 2048, 00:13:51.062 "data_size": 63488 00:13:51.062 } 00:13:51.062 ] 00:13:51.062 }' 00:13:51.062 10:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:51.062 10:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:51.062 10:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:51.062 10:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:51.062 10:42:12 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77036 00:13:51.062 10:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 77036 ']' 00:13:51.062 10:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 77036 00:13:51.062 10:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:13:51.322 10:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:51.322 10:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77036 00:13:51.322 killing process with pid 77036 00:13:51.322 Received shutdown signal, test time was about 18.245469 seconds 00:13:51.322 00:13:51.322 Latency(us) 00:13:51.322 [2024-11-15T10:42:12.484Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:51.322 [2024-11-15T10:42:12.484Z] =================================================================================================================== 00:13:51.322 [2024-11-15T10:42:12.484Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:51.322 10:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:51.322 10:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:51.322 10:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77036' 00:13:51.322 10:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 77036 00:13:51.322 [2024-11-15 10:42:12.247725] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:51.322 10:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 77036 00:13:51.322 [2024-11-15 10:42:12.247880] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:51.322 [2024-11-15 10:42:12.247964] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:51.322 [2024-11-15 10:42:12.247979] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:51.322 [2024-11-15 10:42:12.451636] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:52.699 10:42:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:52.699 ************************************ 00:13:52.699 END TEST raid_rebuild_test_sb_io 00:13:52.699 ************************************ 00:13:52.699 00:13:52.699 real 0m21.579s 00:13:52.699 user 0m29.470s 00:13:52.699 sys 0m1.869s 00:13:52.699 10:42:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:52.699 10:42:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.699 10:42:13 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:52.699 10:42:13 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:13:52.699 10:42:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:52.699 10:42:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:52.699 10:42:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:52.699 ************************************ 00:13:52.699 START TEST raid_rebuild_test 00:13:52.699 ************************************ 00:13:52.699 10:42:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:13:52.699 10:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:52.699 10:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:52.699 10:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:52.699 10:42:13 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:52.699 10:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:52.699 10:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:52.699 10:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:52.699 10:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:52.699 10:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:52.699 10:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:52.699 10:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:52.699 10:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:52.699 10:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:52.699 10:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:52.699 10:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:52.699 10:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:52.699 10:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:52.699 10:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:52.699 10:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:52.699 10:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:52.699 10:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:52.699 10:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:52.699 10:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:13:52.699 10:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:52.699 10:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:52.699 10:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:52.699 10:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:52.699 10:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:52.699 10:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:52.699 10:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77737 00:13:52.699 10:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:52.699 10:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77737 00:13:52.699 10:42:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77737 ']' 00:13:52.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.699 10:42:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.699 10:42:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:52.699 10:42:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.699 10:42:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:52.699 10:42:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.699 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:52.699 Zero copy mechanism will not be used. 
00:13:52.699 [2024-11-15 10:42:13.715071] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:13:52.699 [2024-11-15 10:42:13.715271] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77737 ] 00:13:52.958 [2024-11-15 10:42:13.887431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.958 [2024-11-15 10:42:14.021306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.217 [2024-11-15 10:42:14.231962] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:53.217 [2024-11-15 10:42:14.232034] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:53.783 10:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:53.783 10:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:13:53.783 10:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:53.783 10:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:53.783 10:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.783 10:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.783 BaseBdev1_malloc 00:13:53.783 10:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.783 10:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:53.783 10:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.783 10:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.783 
[2024-11-15 10:42:14.796772] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:53.783 [2024-11-15 10:42:14.796891] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.783 [2024-11-15 10:42:14.796945] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:53.783 [2024-11-15 10:42:14.796991] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.783 [2024-11-15 10:42:14.800012] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.783 [2024-11-15 10:42:14.800065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:53.783 BaseBdev1 00:13:53.783 10:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.783 10:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:53.783 10:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:53.783 10:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.783 10:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.783 BaseBdev2_malloc 00:13:53.783 10:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.783 10:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:53.783 10:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.783 10:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.783 [2024-11-15 10:42:14.853855] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:53.783 [2024-11-15 10:42:14.853959] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:13:53.783 [2024-11-15 10:42:14.853987] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:53.783 [2024-11-15 10:42:14.854007] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.783 [2024-11-15 10:42:14.857145] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.783 [2024-11-15 10:42:14.857212] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:53.783 BaseBdev2 00:13:53.783 10:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.783 10:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:53.783 10:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:53.783 10:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.783 10:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.783 BaseBdev3_malloc 00:13:53.783 10:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.783 10:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:53.783 10:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.783 10:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.783 [2024-11-15 10:42:14.917145] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:53.784 [2024-11-15 10:42:14.917244] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.784 [2024-11-15 10:42:14.917276] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:53.784 [2024-11-15 10:42:14.917296] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.784 [2024-11-15 10:42:14.920164] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.784 [2024-11-15 10:42:14.920229] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:53.784 BaseBdev3 00:13:53.784 10:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.784 10:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:53.784 10:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:53.784 10:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.784 10:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.043 BaseBdev4_malloc 00:13:54.043 10:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.043 10:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:54.043 10:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.043 10:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.043 [2024-11-15 10:42:14.973784] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:54.043 [2024-11-15 10:42:14.973853] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.043 [2024-11-15 10:42:14.973881] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:54.043 [2024-11-15 10:42:14.973900] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.043 [2024-11-15 10:42:14.976718] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.043 [2024-11-15 10:42:14.976776] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:54.043 BaseBdev4 00:13:54.043 10:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.043 10:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:54.043 10:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.043 10:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.043 spare_malloc 00:13:54.043 10:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.043 10:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:54.043 10:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.043 10:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.043 spare_delay 00:13:54.043 10:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.043 10:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:54.043 10:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.043 10:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.043 [2024-11-15 10:42:15.035075] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:54.043 [2024-11-15 10:42:15.035166] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.043 [2024-11-15 10:42:15.035210] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:54.043 [2024-11-15 10:42:15.035242] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.043 [2024-11-15 
10:42:15.038402] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.043 [2024-11-15 10:42:15.038585] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:54.043 spare 00:13:54.043 10:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.043 10:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:54.043 10:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.043 10:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.043 [2024-11-15 10:42:15.043092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:54.043 [2024-11-15 10:42:15.045644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:54.043 [2024-11-15 10:42:15.045855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:54.043 [2024-11-15 10:42:15.045982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:54.043 [2024-11-15 10:42:15.046166] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:54.043 [2024-11-15 10:42:15.046295] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:54.043 [2024-11-15 10:42:15.046742] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:54.043 [2024-11-15 10:42:15.046977] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:54.043 [2024-11-15 10:42:15.046998] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:54.043 [2024-11-15 10:42:15.047235] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:13:54.043 10:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.043 10:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:54.043 10:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:54.043 10:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:54.043 10:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:54.043 10:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:54.043 10:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:54.043 10:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.043 10:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.043 10:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.043 10:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.043 10:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.043 10:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.043 10:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.043 10:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.043 10:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.043 10:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.043 "name": "raid_bdev1", 00:13:54.043 "uuid": "5df723eb-1a31-488d-b0e6-14472bfbcbe1", 00:13:54.043 "strip_size_kb": 0, 00:13:54.043 "state": "online", 00:13:54.043 "raid_level": 
"raid1", 00:13:54.043 "superblock": false, 00:13:54.043 "num_base_bdevs": 4, 00:13:54.043 "num_base_bdevs_discovered": 4, 00:13:54.043 "num_base_bdevs_operational": 4, 00:13:54.043 "base_bdevs_list": [ 00:13:54.043 { 00:13:54.043 "name": "BaseBdev1", 00:13:54.043 "uuid": "e3b606bb-186c-5ba8-b4f5-3bc4bc6a7a46", 00:13:54.043 "is_configured": true, 00:13:54.043 "data_offset": 0, 00:13:54.043 "data_size": 65536 00:13:54.043 }, 00:13:54.043 { 00:13:54.043 "name": "BaseBdev2", 00:13:54.043 "uuid": "dfcc8c2c-5119-5556-ac0f-f6a016169d67", 00:13:54.043 "is_configured": true, 00:13:54.043 "data_offset": 0, 00:13:54.043 "data_size": 65536 00:13:54.043 }, 00:13:54.043 { 00:13:54.043 "name": "BaseBdev3", 00:13:54.043 "uuid": "baca300a-32ef-56e9-b30d-7c8b20473ace", 00:13:54.043 "is_configured": true, 00:13:54.043 "data_offset": 0, 00:13:54.043 "data_size": 65536 00:13:54.043 }, 00:13:54.043 { 00:13:54.043 "name": "BaseBdev4", 00:13:54.043 "uuid": "8967d249-a173-58ce-b5ad-b45cfa3c868c", 00:13:54.043 "is_configured": true, 00:13:54.043 "data_offset": 0, 00:13:54.043 "data_size": 65536 00:13:54.043 } 00:13:54.043 ] 00:13:54.043 }' 00:13:54.043 10:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.043 10:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.612 10:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:54.612 10:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:54.612 10:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.612 10:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.612 [2024-11-15 10:42:15.555809] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:54.612 10:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.612 10:42:15 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:54.612 10:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.612 10:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.612 10:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.612 10:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:54.612 10:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.612 10:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:54.612 10:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:54.612 10:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:54.612 10:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:54.612 10:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:54.612 10:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:54.612 10:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:54.612 10:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:54.612 10:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:54.612 10:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:54.612 10:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:54.612 10:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:54.612 10:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:54.612 10:42:15 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:54.872 [2024-11-15 10:42:15.935522] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:54.872 /dev/nbd0 00:13:54.872 10:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:54.872 10:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:54.872 10:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:54.872 10:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:54.872 10:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:54.872 10:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:54.872 10:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:54.872 10:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:54.872 10:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:54.872 10:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:54.872 10:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:54.872 1+0 records in 00:13:54.872 1+0 records out 00:13:54.872 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000357072 s, 11.5 MB/s 00:13:54.872 10:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:54.872 10:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:54.872 10:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:13:54.872 10:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:54.872 10:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:54.872 10:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:54.872 10:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:54.872 10:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:54.872 10:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:54.872 10:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:14:04.846 65536+0 records in 00:14:04.846 65536+0 records out 00:14:04.846 33554432 bytes (34 MB, 32 MiB) copied, 8.85851 s, 3.8 MB/s 00:14:04.846 10:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:04.846 10:42:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:04.846 10:42:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:04.846 10:42:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:04.846 10:42:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:04.846 10:42:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:04.846 10:42:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:04.846 10:42:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:04.846 [2024-11-15 10:42:25.175973] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.846 10:42:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:04.846 
10:42:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:04.846 10:42:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:04.846 10:42:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:04.846 10:42:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:04.846 10:42:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:04.846 10:42:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:04.846 10:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:04.846 10:42:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.846 10:42:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.846 [2024-11-15 10:42:25.186590] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:04.846 10:42:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.846 10:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:04.846 10:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.846 10:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:04.846 10:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:04.846 10:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:04.846 10:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:04.846 10:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.846 10:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.846 10:42:25 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.846 10:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.846 10:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.846 10:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.846 10:42:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.846 10:42:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.846 10:42:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.846 10:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.846 "name": "raid_bdev1", 00:14:04.846 "uuid": "5df723eb-1a31-488d-b0e6-14472bfbcbe1", 00:14:04.846 "strip_size_kb": 0, 00:14:04.846 "state": "online", 00:14:04.846 "raid_level": "raid1", 00:14:04.846 "superblock": false, 00:14:04.846 "num_base_bdevs": 4, 00:14:04.846 "num_base_bdevs_discovered": 3, 00:14:04.846 "num_base_bdevs_operational": 3, 00:14:04.846 "base_bdevs_list": [ 00:14:04.846 { 00:14:04.846 "name": null, 00:14:04.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.846 "is_configured": false, 00:14:04.846 "data_offset": 0, 00:14:04.846 "data_size": 65536 00:14:04.846 }, 00:14:04.846 { 00:14:04.846 "name": "BaseBdev2", 00:14:04.846 "uuid": "dfcc8c2c-5119-5556-ac0f-f6a016169d67", 00:14:04.846 "is_configured": true, 00:14:04.846 "data_offset": 0, 00:14:04.846 "data_size": 65536 00:14:04.846 }, 00:14:04.846 { 00:14:04.846 "name": "BaseBdev3", 00:14:04.846 "uuid": "baca300a-32ef-56e9-b30d-7c8b20473ace", 00:14:04.846 "is_configured": true, 00:14:04.846 "data_offset": 0, 00:14:04.846 "data_size": 65536 00:14:04.846 }, 00:14:04.846 { 00:14:04.846 "name": "BaseBdev4", 00:14:04.846 "uuid": "8967d249-a173-58ce-b5ad-b45cfa3c868c", 00:14:04.846 
"is_configured": true, 00:14:04.846 "data_offset": 0, 00:14:04.846 "data_size": 65536 00:14:04.846 } 00:14:04.846 ] 00:14:04.846 }' 00:14:04.846 10:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.846 10:42:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.846 10:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:04.846 10:42:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.846 10:42:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.846 [2024-11-15 10:42:25.710781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:04.846 [2024-11-15 10:42:25.725504] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:14:04.846 10:42:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.846 10:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:04.846 [2024-11-15 10:42:25.728026] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:05.782 10:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:05.782 10:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.782 10:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:05.782 10:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:05.782 10:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.782 10:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.782 10:42:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.782 
10:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.782 10:42:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.782 10:42:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.782 10:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.782 "name": "raid_bdev1", 00:14:05.782 "uuid": "5df723eb-1a31-488d-b0e6-14472bfbcbe1", 00:14:05.782 "strip_size_kb": 0, 00:14:05.782 "state": "online", 00:14:05.782 "raid_level": "raid1", 00:14:05.782 "superblock": false, 00:14:05.782 "num_base_bdevs": 4, 00:14:05.782 "num_base_bdevs_discovered": 4, 00:14:05.782 "num_base_bdevs_operational": 4, 00:14:05.782 "process": { 00:14:05.782 "type": "rebuild", 00:14:05.782 "target": "spare", 00:14:05.782 "progress": { 00:14:05.782 "blocks": 20480, 00:14:05.782 "percent": 31 00:14:05.782 } 00:14:05.782 }, 00:14:05.782 "base_bdevs_list": [ 00:14:05.782 { 00:14:05.782 "name": "spare", 00:14:05.782 "uuid": "8ab99142-f314-51e4-994a-41287fdae613", 00:14:05.782 "is_configured": true, 00:14:05.782 "data_offset": 0, 00:14:05.782 "data_size": 65536 00:14:05.782 }, 00:14:05.782 { 00:14:05.782 "name": "BaseBdev2", 00:14:05.782 "uuid": "dfcc8c2c-5119-5556-ac0f-f6a016169d67", 00:14:05.782 "is_configured": true, 00:14:05.782 "data_offset": 0, 00:14:05.782 "data_size": 65536 00:14:05.782 }, 00:14:05.782 { 00:14:05.782 "name": "BaseBdev3", 00:14:05.782 "uuid": "baca300a-32ef-56e9-b30d-7c8b20473ace", 00:14:05.782 "is_configured": true, 00:14:05.782 "data_offset": 0, 00:14:05.782 "data_size": 65536 00:14:05.782 }, 00:14:05.782 { 00:14:05.782 "name": "BaseBdev4", 00:14:05.782 "uuid": "8967d249-a173-58ce-b5ad-b45cfa3c868c", 00:14:05.782 "is_configured": true, 00:14:05.782 "data_offset": 0, 00:14:05.782 "data_size": 65536 00:14:05.782 } 00:14:05.782 ] 00:14:05.782 }' 00:14:05.782 10:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 
-- # jq -r '.process.type // "none"' 00:14:05.782 10:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:05.782 10:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.782 10:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:05.782 10:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:05.782 10:42:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.782 10:42:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.782 [2024-11-15 10:42:26.897622] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:05.782 [2024-11-15 10:42:26.937000] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:05.782 [2024-11-15 10:42:26.937079] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.782 [2024-11-15 10:42:26.937104] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:05.782 [2024-11-15 10:42:26.937119] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:06.041 10:42:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.041 10:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:06.041 10:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.041 10:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.041 10:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:06.041 10:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:06.041 10:42:26 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:06.041 10:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.041 10:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.041 10:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.041 10:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.041 10:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.041 10:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.041 10:42:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.041 10:42:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.041 10:42:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.041 10:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.041 "name": "raid_bdev1", 00:14:06.041 "uuid": "5df723eb-1a31-488d-b0e6-14472bfbcbe1", 00:14:06.041 "strip_size_kb": 0, 00:14:06.041 "state": "online", 00:14:06.041 "raid_level": "raid1", 00:14:06.041 "superblock": false, 00:14:06.041 "num_base_bdevs": 4, 00:14:06.041 "num_base_bdevs_discovered": 3, 00:14:06.041 "num_base_bdevs_operational": 3, 00:14:06.041 "base_bdevs_list": [ 00:14:06.041 { 00:14:06.041 "name": null, 00:14:06.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.041 "is_configured": false, 00:14:06.041 "data_offset": 0, 00:14:06.041 "data_size": 65536 00:14:06.041 }, 00:14:06.041 { 00:14:06.041 "name": "BaseBdev2", 00:14:06.041 "uuid": "dfcc8c2c-5119-5556-ac0f-f6a016169d67", 00:14:06.041 "is_configured": true, 00:14:06.041 "data_offset": 0, 00:14:06.041 "data_size": 65536 00:14:06.041 }, 00:14:06.041 { 00:14:06.041 "name": 
"BaseBdev3", 00:14:06.041 "uuid": "baca300a-32ef-56e9-b30d-7c8b20473ace", 00:14:06.041 "is_configured": true, 00:14:06.041 "data_offset": 0, 00:14:06.041 "data_size": 65536 00:14:06.041 }, 00:14:06.041 { 00:14:06.041 "name": "BaseBdev4", 00:14:06.041 "uuid": "8967d249-a173-58ce-b5ad-b45cfa3c868c", 00:14:06.041 "is_configured": true, 00:14:06.041 "data_offset": 0, 00:14:06.041 "data_size": 65536 00:14:06.041 } 00:14:06.041 ] 00:14:06.041 }' 00:14:06.041 10:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.041 10:42:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.608 10:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:06.608 10:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.608 10:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:06.608 10:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:06.608 10:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.608 10:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.608 10:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.608 10:42:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.608 10:42:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.608 10:42:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.608 10:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.608 "name": "raid_bdev1", 00:14:06.608 "uuid": "5df723eb-1a31-488d-b0e6-14472bfbcbe1", 00:14:06.608 "strip_size_kb": 0, 00:14:06.608 "state": "online", 00:14:06.608 "raid_level": 
"raid1", 00:14:06.608 "superblock": false, 00:14:06.608 "num_base_bdevs": 4, 00:14:06.608 "num_base_bdevs_discovered": 3, 00:14:06.608 "num_base_bdevs_operational": 3, 00:14:06.608 "base_bdevs_list": [ 00:14:06.608 { 00:14:06.608 "name": null, 00:14:06.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.608 "is_configured": false, 00:14:06.608 "data_offset": 0, 00:14:06.608 "data_size": 65536 00:14:06.608 }, 00:14:06.608 { 00:14:06.608 "name": "BaseBdev2", 00:14:06.608 "uuid": "dfcc8c2c-5119-5556-ac0f-f6a016169d67", 00:14:06.608 "is_configured": true, 00:14:06.608 "data_offset": 0, 00:14:06.608 "data_size": 65536 00:14:06.608 }, 00:14:06.608 { 00:14:06.608 "name": "BaseBdev3", 00:14:06.608 "uuid": "baca300a-32ef-56e9-b30d-7c8b20473ace", 00:14:06.608 "is_configured": true, 00:14:06.608 "data_offset": 0, 00:14:06.608 "data_size": 65536 00:14:06.608 }, 00:14:06.608 { 00:14:06.608 "name": "BaseBdev4", 00:14:06.608 "uuid": "8967d249-a173-58ce-b5ad-b45cfa3c868c", 00:14:06.608 "is_configured": true, 00:14:06.608 "data_offset": 0, 00:14:06.608 "data_size": 65536 00:14:06.608 } 00:14:06.608 ] 00:14:06.608 }' 00:14:06.608 10:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.608 10:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:06.608 10:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.608 10:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:06.608 10:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:06.608 10:42:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.608 10:42:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.608 [2024-11-15 10:42:27.641139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is 
claimed 00:14:06.608 [2024-11-15 10:42:27.654924] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:14:06.608 10:42:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.608 10:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:06.608 [2024-11-15 10:42:27.657868] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:07.660 10:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:07.660 10:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.660 10:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:07.660 10:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:07.660 10:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.660 10:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.660 10:42:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.660 10:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.660 10:42:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.660 10:42:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.660 10:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.660 "name": "raid_bdev1", 00:14:07.660 "uuid": "5df723eb-1a31-488d-b0e6-14472bfbcbe1", 00:14:07.660 "strip_size_kb": 0, 00:14:07.660 "state": "online", 00:14:07.660 "raid_level": "raid1", 00:14:07.660 "superblock": false, 00:14:07.660 "num_base_bdevs": 4, 00:14:07.660 "num_base_bdevs_discovered": 4, 00:14:07.660 "num_base_bdevs_operational": 4, 
00:14:07.660 "process": { 00:14:07.660 "type": "rebuild", 00:14:07.660 "target": "spare", 00:14:07.660 "progress": { 00:14:07.660 "blocks": 20480, 00:14:07.660 "percent": 31 00:14:07.660 } 00:14:07.660 }, 00:14:07.660 "base_bdevs_list": [ 00:14:07.660 { 00:14:07.660 "name": "spare", 00:14:07.660 "uuid": "8ab99142-f314-51e4-994a-41287fdae613", 00:14:07.660 "is_configured": true, 00:14:07.660 "data_offset": 0, 00:14:07.660 "data_size": 65536 00:14:07.660 }, 00:14:07.660 { 00:14:07.660 "name": "BaseBdev2", 00:14:07.660 "uuid": "dfcc8c2c-5119-5556-ac0f-f6a016169d67", 00:14:07.660 "is_configured": true, 00:14:07.660 "data_offset": 0, 00:14:07.660 "data_size": 65536 00:14:07.660 }, 00:14:07.660 { 00:14:07.660 "name": "BaseBdev3", 00:14:07.660 "uuid": "baca300a-32ef-56e9-b30d-7c8b20473ace", 00:14:07.660 "is_configured": true, 00:14:07.660 "data_offset": 0, 00:14:07.660 "data_size": 65536 00:14:07.660 }, 00:14:07.660 { 00:14:07.660 "name": "BaseBdev4", 00:14:07.660 "uuid": "8967d249-a173-58ce-b5ad-b45cfa3c868c", 00:14:07.660 "is_configured": true, 00:14:07.660 "data_offset": 0, 00:14:07.660 "data_size": 65536 00:14:07.660 } 00:14:07.660 ] 00:14:07.660 }' 00:14:07.660 10:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.660 10:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:07.660 10:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.919 10:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:07.919 10:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:07.919 10:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:07.919 10:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:07.919 10:42:28 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:07.919 10:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:07.919 10:42:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.919 10:42:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.919 [2024-11-15 10:42:28.843376] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:07.919 [2024-11-15 10:42:28.866490] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:14:07.919 10:42:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.919 10:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:07.920 10:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:07.920 10:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:07.920 10:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.920 10:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:07.920 10:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:07.920 10:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.920 10:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.920 10:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.920 10:42:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.920 10:42:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.920 10:42:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:14:07.920 10:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.920 "name": "raid_bdev1", 00:14:07.920 "uuid": "5df723eb-1a31-488d-b0e6-14472bfbcbe1", 00:14:07.920 "strip_size_kb": 0, 00:14:07.920 "state": "online", 00:14:07.920 "raid_level": "raid1", 00:14:07.920 "superblock": false, 00:14:07.920 "num_base_bdevs": 4, 00:14:07.920 "num_base_bdevs_discovered": 3, 00:14:07.920 "num_base_bdevs_operational": 3, 00:14:07.920 "process": { 00:14:07.920 "type": "rebuild", 00:14:07.920 "target": "spare", 00:14:07.920 "progress": { 00:14:07.920 "blocks": 24576, 00:14:07.920 "percent": 37 00:14:07.920 } 00:14:07.920 }, 00:14:07.920 "base_bdevs_list": [ 00:14:07.920 { 00:14:07.920 "name": "spare", 00:14:07.920 "uuid": "8ab99142-f314-51e4-994a-41287fdae613", 00:14:07.920 "is_configured": true, 00:14:07.920 "data_offset": 0, 00:14:07.920 "data_size": 65536 00:14:07.920 }, 00:14:07.920 { 00:14:07.920 "name": null, 00:14:07.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.920 "is_configured": false, 00:14:07.920 "data_offset": 0, 00:14:07.920 "data_size": 65536 00:14:07.920 }, 00:14:07.920 { 00:14:07.920 "name": "BaseBdev3", 00:14:07.920 "uuid": "baca300a-32ef-56e9-b30d-7c8b20473ace", 00:14:07.920 "is_configured": true, 00:14:07.920 "data_offset": 0, 00:14:07.920 "data_size": 65536 00:14:07.920 }, 00:14:07.920 { 00:14:07.920 "name": "BaseBdev4", 00:14:07.920 "uuid": "8967d249-a173-58ce-b5ad-b45cfa3c868c", 00:14:07.920 "is_configured": true, 00:14:07.920 "data_offset": 0, 00:14:07.920 "data_size": 65536 00:14:07.920 } 00:14:07.920 ] 00:14:07.920 }' 00:14:07.920 10:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.920 10:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:07.920 10:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.920 10:42:29 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:07.920 10:42:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=478 00:14:07.920 10:42:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:07.920 10:42:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:07.920 10:42:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.920 10:42:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:07.920 10:42:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:07.920 10:42:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.920 10:42:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.920 10:42:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.920 10:42:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.920 10:42:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.920 10:42:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.920 10:42:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.920 "name": "raid_bdev1", 00:14:07.920 "uuid": "5df723eb-1a31-488d-b0e6-14472bfbcbe1", 00:14:07.920 "strip_size_kb": 0, 00:14:07.920 "state": "online", 00:14:07.920 "raid_level": "raid1", 00:14:07.920 "superblock": false, 00:14:07.920 "num_base_bdevs": 4, 00:14:07.920 "num_base_bdevs_discovered": 3, 00:14:07.920 "num_base_bdevs_operational": 3, 00:14:07.920 "process": { 00:14:07.920 "type": "rebuild", 00:14:07.920 "target": "spare", 00:14:07.920 "progress": { 00:14:07.920 "blocks": 26624, 00:14:07.920 "percent": 40 
00:14:07.920 } 00:14:07.920 }, 00:14:07.920 "base_bdevs_list": [ 00:14:07.920 { 00:14:07.920 "name": "spare", 00:14:07.920 "uuid": "8ab99142-f314-51e4-994a-41287fdae613", 00:14:07.920 "is_configured": true, 00:14:07.920 "data_offset": 0, 00:14:07.920 "data_size": 65536 00:14:07.920 }, 00:14:07.920 { 00:14:07.920 "name": null, 00:14:07.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.920 "is_configured": false, 00:14:07.920 "data_offset": 0, 00:14:07.920 "data_size": 65536 00:14:07.920 }, 00:14:07.920 { 00:14:07.920 "name": "BaseBdev3", 00:14:07.920 "uuid": "baca300a-32ef-56e9-b30d-7c8b20473ace", 00:14:07.920 "is_configured": true, 00:14:07.920 "data_offset": 0, 00:14:07.920 "data_size": 65536 00:14:07.920 }, 00:14:07.920 { 00:14:07.920 "name": "BaseBdev4", 00:14:07.920 "uuid": "8967d249-a173-58ce-b5ad-b45cfa3c868c", 00:14:07.920 "is_configured": true, 00:14:07.920 "data_offset": 0, 00:14:07.920 "data_size": 65536 00:14:07.920 } 00:14:07.920 ] 00:14:07.920 }' 00:14:07.920 10:42:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.178 10:42:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:08.178 10:42:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.178 10:42:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:08.178 10:42:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:09.114 10:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:09.114 10:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:09.114 10:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:09.114 10:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:09.114 10:42:30 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:09.114 10:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.114 10:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.114 10:42:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.114 10:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.114 10:42:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.114 10:42:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.114 10:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:09.114 "name": "raid_bdev1", 00:14:09.114 "uuid": "5df723eb-1a31-488d-b0e6-14472bfbcbe1", 00:14:09.114 "strip_size_kb": 0, 00:14:09.114 "state": "online", 00:14:09.114 "raid_level": "raid1", 00:14:09.114 "superblock": false, 00:14:09.114 "num_base_bdevs": 4, 00:14:09.114 "num_base_bdevs_discovered": 3, 00:14:09.114 "num_base_bdevs_operational": 3, 00:14:09.114 "process": { 00:14:09.114 "type": "rebuild", 00:14:09.114 "target": "spare", 00:14:09.114 "progress": { 00:14:09.114 "blocks": 51200, 00:14:09.114 "percent": 78 00:14:09.114 } 00:14:09.114 }, 00:14:09.114 "base_bdevs_list": [ 00:14:09.114 { 00:14:09.114 "name": "spare", 00:14:09.114 "uuid": "8ab99142-f314-51e4-994a-41287fdae613", 00:14:09.114 "is_configured": true, 00:14:09.114 "data_offset": 0, 00:14:09.114 "data_size": 65536 00:14:09.114 }, 00:14:09.114 { 00:14:09.114 "name": null, 00:14:09.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.114 "is_configured": false, 00:14:09.114 "data_offset": 0, 00:14:09.114 "data_size": 65536 00:14:09.114 }, 00:14:09.114 { 00:14:09.114 "name": "BaseBdev3", 00:14:09.115 "uuid": "baca300a-32ef-56e9-b30d-7c8b20473ace", 00:14:09.115 "is_configured": true, 
00:14:09.115 "data_offset": 0, 00:14:09.115 "data_size": 65536 00:14:09.115 }, 00:14:09.115 { 00:14:09.115 "name": "BaseBdev4", 00:14:09.115 "uuid": "8967d249-a173-58ce-b5ad-b45cfa3c868c", 00:14:09.115 "is_configured": true, 00:14:09.115 "data_offset": 0, 00:14:09.115 "data_size": 65536 00:14:09.115 } 00:14:09.115 ] 00:14:09.115 }' 00:14:09.115 10:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.373 10:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:09.373 10:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.373 10:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:09.373 10:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:09.941 [2024-11-15 10:42:30.880373] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:09.941 [2024-11-15 10:42:30.880486] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:09.941 [2024-11-15 10:42:30.880601] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:10.200 10:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:10.200 10:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:10.200 10:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.200 10:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:10.200 10:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:10.200 10:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.200 10:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:10.200 10:42:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.200 10:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.200 10:42:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.458 10:42:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.458 10:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.458 "name": "raid_bdev1", 00:14:10.458 "uuid": "5df723eb-1a31-488d-b0e6-14472bfbcbe1", 00:14:10.458 "strip_size_kb": 0, 00:14:10.458 "state": "online", 00:14:10.458 "raid_level": "raid1", 00:14:10.458 "superblock": false, 00:14:10.458 "num_base_bdevs": 4, 00:14:10.458 "num_base_bdevs_discovered": 3, 00:14:10.458 "num_base_bdevs_operational": 3, 00:14:10.458 "base_bdevs_list": [ 00:14:10.458 { 00:14:10.458 "name": "spare", 00:14:10.458 "uuid": "8ab99142-f314-51e4-994a-41287fdae613", 00:14:10.458 "is_configured": true, 00:14:10.458 "data_offset": 0, 00:14:10.458 "data_size": 65536 00:14:10.458 }, 00:14:10.458 { 00:14:10.458 "name": null, 00:14:10.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.458 "is_configured": false, 00:14:10.458 "data_offset": 0, 00:14:10.458 "data_size": 65536 00:14:10.458 }, 00:14:10.458 { 00:14:10.458 "name": "BaseBdev3", 00:14:10.458 "uuid": "baca300a-32ef-56e9-b30d-7c8b20473ace", 00:14:10.458 "is_configured": true, 00:14:10.458 "data_offset": 0, 00:14:10.458 "data_size": 65536 00:14:10.458 }, 00:14:10.458 { 00:14:10.458 "name": "BaseBdev4", 00:14:10.458 "uuid": "8967d249-a173-58ce-b5ad-b45cfa3c868c", 00:14:10.458 "is_configured": true, 00:14:10.458 "data_offset": 0, 00:14:10.458 "data_size": 65536 00:14:10.458 } 00:14:10.458 ] 00:14:10.458 }' 00:14:10.458 10:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.458 10:42:31 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:10.458 10:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.458 10:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:10.458 10:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:10.458 10:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:10.458 10:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.458 10:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:10.458 10:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:10.458 10:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.458 10:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.458 10:42:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.458 10:42:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.458 10:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.458 10:42:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.458 10:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.458 "name": "raid_bdev1", 00:14:10.458 "uuid": "5df723eb-1a31-488d-b0e6-14472bfbcbe1", 00:14:10.458 "strip_size_kb": 0, 00:14:10.458 "state": "online", 00:14:10.458 "raid_level": "raid1", 00:14:10.458 "superblock": false, 00:14:10.458 "num_base_bdevs": 4, 00:14:10.458 "num_base_bdevs_discovered": 3, 00:14:10.458 "num_base_bdevs_operational": 3, 00:14:10.458 "base_bdevs_list": [ 00:14:10.458 { 00:14:10.458 "name": "spare", 
00:14:10.458 "uuid": "8ab99142-f314-51e4-994a-41287fdae613", 00:14:10.458 "is_configured": true, 00:14:10.458 "data_offset": 0, 00:14:10.458 "data_size": 65536 00:14:10.458 }, 00:14:10.458 { 00:14:10.458 "name": null, 00:14:10.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.458 "is_configured": false, 00:14:10.459 "data_offset": 0, 00:14:10.459 "data_size": 65536 00:14:10.459 }, 00:14:10.459 { 00:14:10.459 "name": "BaseBdev3", 00:14:10.459 "uuid": "baca300a-32ef-56e9-b30d-7c8b20473ace", 00:14:10.459 "is_configured": true, 00:14:10.459 "data_offset": 0, 00:14:10.459 "data_size": 65536 00:14:10.459 }, 00:14:10.459 { 00:14:10.459 "name": "BaseBdev4", 00:14:10.459 "uuid": "8967d249-a173-58ce-b5ad-b45cfa3c868c", 00:14:10.459 "is_configured": true, 00:14:10.459 "data_offset": 0, 00:14:10.459 "data_size": 65536 00:14:10.459 } 00:14:10.459 ] 00:14:10.459 }' 00:14:10.459 10:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.459 10:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:10.459 10:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.717 10:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:10.717 10:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:10.717 10:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.717 10:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.717 10:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:10.717 10:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:10.717 10:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:10.717 10:42:31 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.717 10:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.717 10:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.717 10:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.717 10:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.717 10:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.717 10:42:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.717 10:42:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.717 10:42:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.717 10:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.717 "name": "raid_bdev1", 00:14:10.717 "uuid": "5df723eb-1a31-488d-b0e6-14472bfbcbe1", 00:14:10.717 "strip_size_kb": 0, 00:14:10.717 "state": "online", 00:14:10.717 "raid_level": "raid1", 00:14:10.717 "superblock": false, 00:14:10.717 "num_base_bdevs": 4, 00:14:10.717 "num_base_bdevs_discovered": 3, 00:14:10.717 "num_base_bdevs_operational": 3, 00:14:10.717 "base_bdevs_list": [ 00:14:10.717 { 00:14:10.717 "name": "spare", 00:14:10.717 "uuid": "8ab99142-f314-51e4-994a-41287fdae613", 00:14:10.717 "is_configured": true, 00:14:10.717 "data_offset": 0, 00:14:10.717 "data_size": 65536 00:14:10.717 }, 00:14:10.717 { 00:14:10.717 "name": null, 00:14:10.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.717 "is_configured": false, 00:14:10.717 "data_offset": 0, 00:14:10.717 "data_size": 65536 00:14:10.717 }, 00:14:10.717 { 00:14:10.717 "name": "BaseBdev3", 00:14:10.717 "uuid": "baca300a-32ef-56e9-b30d-7c8b20473ace", 00:14:10.717 "is_configured": true, 
00:14:10.717 "data_offset": 0, 00:14:10.717 "data_size": 65536 00:14:10.717 }, 00:14:10.717 { 00:14:10.717 "name": "BaseBdev4", 00:14:10.717 "uuid": "8967d249-a173-58ce-b5ad-b45cfa3c868c", 00:14:10.717 "is_configured": true, 00:14:10.717 "data_offset": 0, 00:14:10.717 "data_size": 65536 00:14:10.717 } 00:14:10.717 ] 00:14:10.717 }' 00:14:10.717 10:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.717 10:42:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.975 10:42:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:10.975 10:42:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.975 10:42:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.234 [2024-11-15 10:42:32.140406] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:11.234 [2024-11-15 10:42:32.140645] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:11.234 [2024-11-15 10:42:32.140787] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:11.234 [2024-11-15 10:42:32.140906] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:11.234 [2024-11-15 10:42:32.140924] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:11.234 10:42:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.234 10:42:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.234 10:42:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:11.234 10:42:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.234 10:42:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:14:11.234 10:42:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.234 10:42:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:11.234 10:42:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:11.234 10:42:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:11.234 10:42:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:11.234 10:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:11.234 10:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:11.234 10:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:11.234 10:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:11.234 10:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:11.234 10:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:11.234 10:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:11.234 10:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:11.234 10:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:11.492 /dev/nbd0 00:14:11.492 10:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:11.492 10:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:11.492 10:42:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:11.492 10:42:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:11.492 10:42:32 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:11.492 10:42:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:11.492 10:42:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:11.492 10:42:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:11.492 10:42:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:11.492 10:42:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:11.492 10:42:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:11.492 1+0 records in 00:14:11.492 1+0 records out 00:14:11.492 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236086 s, 17.3 MB/s 00:14:11.492 10:42:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:11.492 10:42:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:11.492 10:42:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:11.492 10:42:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:11.492 10:42:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:11.492 10:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:11.492 10:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:11.492 10:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:11.751 /dev/nbd1 00:14:11.751 10:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:11.751 
10:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:11.751 10:42:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:11.751 10:42:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:11.751 10:42:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:11.751 10:42:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:11.751 10:42:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:11.751 10:42:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:11.751 10:42:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:11.751 10:42:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:11.751 10:42:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:11.751 1+0 records in 00:14:11.751 1+0 records out 00:14:11.751 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338825 s, 12.1 MB/s 00:14:11.751 10:42:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:11.751 10:42:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:11.751 10:42:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:11.751 10:42:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:11.751 10:42:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:11.751 10:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:11.751 10:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
00:14:11.751 10:42:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:12.009 10:42:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:12.009 10:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:12.009 10:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:12.009 10:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:12.009 10:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:12.009 10:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:12.009 10:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:12.267 10:42:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:12.267 10:42:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:12.267 10:42:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:12.267 10:42:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:12.267 10:42:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:12.267 10:42:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:12.267 10:42:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:12.267 10:42:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:12.267 10:42:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:12.267 10:42:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:12.526 
10:42:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:12.526 10:42:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:12.526 10:42:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:12.526 10:42:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:12.526 10:42:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:12.526 10:42:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:12.526 10:42:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:12.526 10:42:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:12.526 10:42:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:12.526 10:42:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77737 00:14:12.526 10:42:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77737 ']' 00:14:12.526 10:42:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77737 00:14:12.526 10:42:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:14:12.526 10:42:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:12.526 10:42:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77737 00:14:12.526 10:42:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:12.526 killing process with pid 77737 00:14:12.526 10:42:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:12.526 10:42:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77737' 00:14:12.526 Received shutdown signal, test time was about 60.000000 seconds 00:14:12.526 00:14:12.526 Latency(us) 
00:14:12.526 [2024-11-15T10:42:33.688Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:12.526 [2024-11-15T10:42:33.688Z] =================================================================================================================== 00:14:12.526 [2024-11-15T10:42:33.688Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:12.526 10:42:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77737 00:14:12.526 [2024-11-15 10:42:33.625284] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:12.526 10:42:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77737 00:14:13.092 [2024-11-15 10:42:34.051728] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:14.031 10:42:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:14.031 00:14:14.031 real 0m21.460s 00:14:14.031 user 0m24.250s 00:14:14.031 sys 0m3.824s 00:14:14.031 10:42:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:14.031 10:42:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.031 ************************************ 00:14:14.031 END TEST raid_rebuild_test 00:14:14.031 ************************************ 00:14:14.031 10:42:35 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:14:14.031 10:42:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:14.031 10:42:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:14.031 10:42:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:14.031 ************************************ 00:14:14.031 START TEST raid_rebuild_test_sb 00:14:14.031 ************************************ 00:14:14.031 10:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:14:14.031 10:42:35 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:14.031 10:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:14.031 10:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:14.031 10:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:14.031 10:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:14.031 10:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:14.031 10:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:14.031 10:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:14.031 10:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:14.031 10:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:14.031 10:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:14.031 10:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:14.031 10:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:14.031 10:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:14.031 10:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:14.031 10:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:14.031 10:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:14.031 10:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:14.031 10:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:14.031 10:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:14.031 10:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:14.031 10:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:14.031 10:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:14.031 10:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:14.031 10:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:14.031 10:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:14.031 10:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:14.031 10:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:14.031 10:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:14.031 10:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:14.031 10:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78223 00:14:14.031 10:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78223 00:14:14.031 10:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:14.031 10:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78223 ']' 00:14:14.031 10:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.031 10:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:14.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:14.031 10:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.031 10:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:14.031 10:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.289 [2024-11-15 10:42:35.243857] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:14:14.289 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:14.289 Zero copy mechanism will not be used. 00:14:14.289 [2024-11-15 10:42:35.244631] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78223 ] 00:14:14.289 [2024-11-15 10:42:35.421851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.547 [2024-11-15 10:42:35.549059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.804 [2024-11-15 10:42:35.751287] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:14.804 [2024-11-15 10:42:35.751330] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:15.062 10:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:15.062 10:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:15.062 10:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:15.062 10:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:15.062 10:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.062 10:42:36 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.321 BaseBdev1_malloc 00:14:15.321 10:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.321 10:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:15.321 10:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.321 10:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.321 [2024-11-15 10:42:36.242598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:15.321 [2024-11-15 10:42:36.242676] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.321 [2024-11-15 10:42:36.242710] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:15.321 [2024-11-15 10:42:36.242730] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.321 [2024-11-15 10:42:36.245418] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.322 [2024-11-15 10:42:36.245466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:15.322 BaseBdev1 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.322 BaseBdev2_malloc 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.322 [2024-11-15 10:42:36.294240] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:15.322 [2024-11-15 10:42:36.294315] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.322 [2024-11-15 10:42:36.294343] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:15.322 [2024-11-15 10:42:36.294364] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.322 [2024-11-15 10:42:36.297091] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.322 [2024-11-15 10:42:36.297134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:15.322 BaseBdev2 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.322 BaseBdev3_malloc 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:15.322 10:42:36 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.322 [2024-11-15 10:42:36.353825] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:15.322 [2024-11-15 10:42:36.353888] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.322 [2024-11-15 10:42:36.353919] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:15.322 [2024-11-15 10:42:36.353939] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.322 [2024-11-15 10:42:36.356699] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.322 [2024-11-15 10:42:36.356741] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:15.322 BaseBdev3 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.322 BaseBdev4_malloc 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.322 
[2024-11-15 10:42:36.405596] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:15.322 [2024-11-15 10:42:36.405662] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.322 [2024-11-15 10:42:36.405691] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:15.322 [2024-11-15 10:42:36.405709] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.322 [2024-11-15 10:42:36.408360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.322 [2024-11-15 10:42:36.408408] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:15.322 BaseBdev4 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.322 spare_malloc 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.322 spare_delay 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:15.322 10:42:36 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.322 [2024-11-15 10:42:36.465432] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:15.322 [2024-11-15 10:42:36.465513] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.322 [2024-11-15 10:42:36.465545] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:15.322 [2024-11-15 10:42:36.465564] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.322 [2024-11-15 10:42:36.468259] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.322 [2024-11-15 10:42:36.468305] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:15.322 spare 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.322 [2024-11-15 10:42:36.473509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:15.322 [2024-11-15 10:42:36.475850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:15.322 [2024-11-15 10:42:36.475949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:15.322 [2024-11-15 10:42:36.476029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:15.322 [2024-11-15 10:42:36.476267] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:15.322 [2024-11-15 10:42:36.476304] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:15.322 [2024-11-15 10:42:36.476639] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:15.322 [2024-11-15 10:42:36.476893] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:15.322 [2024-11-15 10:42:36.476920] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:15.322 [2024-11-15 10:42:36.477113] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:15.322 10:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:15.580 10:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.580 10:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.580 10:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.580 10:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.580 10:42:36 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.580 10:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.580 10:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.580 10:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.580 10:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.580 10:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.580 "name": "raid_bdev1", 00:14:15.580 "uuid": "1556e8ba-8474-44be-811f-b5d5c443cd67", 00:14:15.580 "strip_size_kb": 0, 00:14:15.580 "state": "online", 00:14:15.580 "raid_level": "raid1", 00:14:15.580 "superblock": true, 00:14:15.580 "num_base_bdevs": 4, 00:14:15.580 "num_base_bdevs_discovered": 4, 00:14:15.581 "num_base_bdevs_operational": 4, 00:14:15.581 "base_bdevs_list": [ 00:14:15.581 { 00:14:15.581 "name": "BaseBdev1", 00:14:15.581 "uuid": "f149b082-c1fb-5645-b711-b10d3aa32196", 00:14:15.581 "is_configured": true, 00:14:15.581 "data_offset": 2048, 00:14:15.581 "data_size": 63488 00:14:15.581 }, 00:14:15.581 { 00:14:15.581 "name": "BaseBdev2", 00:14:15.581 "uuid": "bc50d863-98a2-55de-9bdc-8dab3f884c92", 00:14:15.581 "is_configured": true, 00:14:15.581 "data_offset": 2048, 00:14:15.581 "data_size": 63488 00:14:15.581 }, 00:14:15.581 { 00:14:15.581 "name": "BaseBdev3", 00:14:15.581 "uuid": "8f6034f9-0795-5abb-90e4-846763c20022", 00:14:15.581 "is_configured": true, 00:14:15.581 "data_offset": 2048, 00:14:15.581 "data_size": 63488 00:14:15.581 }, 00:14:15.581 { 00:14:15.581 "name": "BaseBdev4", 00:14:15.581 "uuid": "efbf2c70-ec39-57a4-b39c-1d8199d0c39e", 00:14:15.581 "is_configured": true, 00:14:15.581 "data_offset": 2048, 00:14:15.581 "data_size": 63488 00:14:15.581 } 00:14:15.581 ] 00:14:15.581 }' 00:14:15.581 10:42:36 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.581 10:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.838 10:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:15.838 10:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:15.838 10:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.838 10:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.838 [2024-11-15 10:42:36.974033] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:16.099 10:42:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.099 10:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:16.099 10:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.099 10:42:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.099 10:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:16.099 10:42:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.099 10:42:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.099 10:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:16.099 10:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:16.099 10:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:16.099 10:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:16.099 10:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 
00:14:16.099 10:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:16.099 10:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:16.099 10:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:16.099 10:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:16.099 10:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:16.099 10:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:16.099 10:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:16.099 10:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:16.099 10:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:16.358 [2024-11-15 10:42:37.305789] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:16.358 /dev/nbd0 00:14:16.358 10:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:16.358 10:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:16.358 10:42:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:16.358 10:42:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:16.358 10:42:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:16.358 10:42:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:16.358 10:42:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:16.358 10:42:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:16.358 
10:42:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:16.358 10:42:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:16.358 10:42:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:16.358 1+0 records in 00:14:16.358 1+0 records out 00:14:16.358 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027487 s, 14.9 MB/s 00:14:16.358 10:42:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:16.358 10:42:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:16.358 10:42:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:16.358 10:42:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:16.358 10:42:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:16.359 10:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:16.359 10:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:16.359 10:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:16.359 10:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:16.359 10:42:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:14:26.330 63488+0 records in 00:14:26.330 63488+0 records out 00:14:26.330 32505856 bytes (33 MB, 31 MiB) copied, 8.38126 s, 3.9 MB/s 00:14:26.330 10:42:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:26.330 10:42:45 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:26.330 10:42:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:26.330 10:42:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:26.330 10:42:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:26.330 10:42:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:26.330 10:42:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:26.330 [2024-11-15 10:42:45.985861] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.330 10:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:26.330 10:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:26.330 10:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:26.330 10:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:26.330 10:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:26.330 10:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:26.330 10:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:26.330 10:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:26.330 10:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:26.330 10:42:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.330 10:42:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.330 [2024-11-15 10:42:46.014184] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:26.330 
10:42:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.330 10:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:26.330 10:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:26.330 10:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.330 10:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:26.330 10:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:26.330 10:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:26.330 10:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.330 10:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.330 10:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.330 10:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.330 10:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.330 10:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.330 10:42:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.330 10:42:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.330 10:42:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.330 10:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.330 "name": "raid_bdev1", 00:14:26.330 "uuid": "1556e8ba-8474-44be-811f-b5d5c443cd67", 00:14:26.330 "strip_size_kb": 0, 00:14:26.330 "state": 
"online", 00:14:26.330 "raid_level": "raid1", 00:14:26.330 "superblock": true, 00:14:26.330 "num_base_bdevs": 4, 00:14:26.330 "num_base_bdevs_discovered": 3, 00:14:26.330 "num_base_bdevs_operational": 3, 00:14:26.330 "base_bdevs_list": [ 00:14:26.330 { 00:14:26.330 "name": null, 00:14:26.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.330 "is_configured": false, 00:14:26.330 "data_offset": 0, 00:14:26.330 "data_size": 63488 00:14:26.330 }, 00:14:26.330 { 00:14:26.330 "name": "BaseBdev2", 00:14:26.330 "uuid": "bc50d863-98a2-55de-9bdc-8dab3f884c92", 00:14:26.330 "is_configured": true, 00:14:26.330 "data_offset": 2048, 00:14:26.330 "data_size": 63488 00:14:26.330 }, 00:14:26.330 { 00:14:26.330 "name": "BaseBdev3", 00:14:26.330 "uuid": "8f6034f9-0795-5abb-90e4-846763c20022", 00:14:26.330 "is_configured": true, 00:14:26.330 "data_offset": 2048, 00:14:26.330 "data_size": 63488 00:14:26.330 }, 00:14:26.330 { 00:14:26.330 "name": "BaseBdev4", 00:14:26.330 "uuid": "efbf2c70-ec39-57a4-b39c-1d8199d0c39e", 00:14:26.330 "is_configured": true, 00:14:26.330 "data_offset": 2048, 00:14:26.330 "data_size": 63488 00:14:26.330 } 00:14:26.330 ] 00:14:26.330 }' 00:14:26.330 10:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.330 10:42:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.330 10:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:26.330 10:42:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.330 10:42:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.330 [2024-11-15 10:42:46.482317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:26.330 [2024-11-15 10:42:46.496597] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:14:26.330 10:42:46 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.331 10:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:26.331 [2024-11-15 10:42:46.499130] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:26.589 10:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:26.589 10:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.589 10:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:26.589 10:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:26.589 10:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.589 10:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.589 10:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.589 10:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.589 10:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.589 10:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.589 10:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.589 "name": "raid_bdev1", 00:14:26.589 "uuid": "1556e8ba-8474-44be-811f-b5d5c443cd67", 00:14:26.589 "strip_size_kb": 0, 00:14:26.589 "state": "online", 00:14:26.589 "raid_level": "raid1", 00:14:26.589 "superblock": true, 00:14:26.589 "num_base_bdevs": 4, 00:14:26.589 "num_base_bdevs_discovered": 4, 00:14:26.589 "num_base_bdevs_operational": 4, 00:14:26.589 "process": { 00:14:26.589 "type": "rebuild", 00:14:26.589 "target": "spare", 00:14:26.589 "progress": { 00:14:26.589 "blocks": 20480, 
00:14:26.589 "percent": 32 00:14:26.589 } 00:14:26.589 }, 00:14:26.589 "base_bdevs_list": [ 00:14:26.589 { 00:14:26.589 "name": "spare", 00:14:26.589 "uuid": "f6ac45cf-3e4b-5c22-b08e-ca6dbeb6488e", 00:14:26.589 "is_configured": true, 00:14:26.589 "data_offset": 2048, 00:14:26.589 "data_size": 63488 00:14:26.589 }, 00:14:26.589 { 00:14:26.589 "name": "BaseBdev2", 00:14:26.589 "uuid": "bc50d863-98a2-55de-9bdc-8dab3f884c92", 00:14:26.589 "is_configured": true, 00:14:26.589 "data_offset": 2048, 00:14:26.589 "data_size": 63488 00:14:26.589 }, 00:14:26.589 { 00:14:26.589 "name": "BaseBdev3", 00:14:26.589 "uuid": "8f6034f9-0795-5abb-90e4-846763c20022", 00:14:26.589 "is_configured": true, 00:14:26.589 "data_offset": 2048, 00:14:26.589 "data_size": 63488 00:14:26.589 }, 00:14:26.589 { 00:14:26.589 "name": "BaseBdev4", 00:14:26.589 "uuid": "efbf2c70-ec39-57a4-b39c-1d8199d0c39e", 00:14:26.589 "is_configured": true, 00:14:26.589 "data_offset": 2048, 00:14:26.589 "data_size": 63488 00:14:26.589 } 00:14:26.589 ] 00:14:26.589 }' 00:14:26.589 10:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.589 10:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:26.589 10:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.589 10:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:26.589 10:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:26.589 10:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.589 10:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.589 [2024-11-15 10:42:47.664573] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:26.589 [2024-11-15 10:42:47.707783] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:26.589 [2024-11-15 10:42:47.707871] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.589 [2024-11-15 10:42:47.707897] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:26.589 [2024-11-15 10:42:47.707914] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:26.589 10:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.589 10:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:26.589 10:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:26.589 10:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.589 10:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:26.589 10:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:26.589 10:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:26.589 10:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.589 10:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.589 10:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.589 10:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.589 10:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.589 10:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.589 10:42:47 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.589 10:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.873 10:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.873 10:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.873 "name": "raid_bdev1", 00:14:26.873 "uuid": "1556e8ba-8474-44be-811f-b5d5c443cd67", 00:14:26.873 "strip_size_kb": 0, 00:14:26.873 "state": "online", 00:14:26.873 "raid_level": "raid1", 00:14:26.873 "superblock": true, 00:14:26.873 "num_base_bdevs": 4, 00:14:26.873 "num_base_bdevs_discovered": 3, 00:14:26.873 "num_base_bdevs_operational": 3, 00:14:26.873 "base_bdevs_list": [ 00:14:26.873 { 00:14:26.873 "name": null, 00:14:26.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.873 "is_configured": false, 00:14:26.873 "data_offset": 0, 00:14:26.873 "data_size": 63488 00:14:26.873 }, 00:14:26.873 { 00:14:26.873 "name": "BaseBdev2", 00:14:26.873 "uuid": "bc50d863-98a2-55de-9bdc-8dab3f884c92", 00:14:26.873 "is_configured": true, 00:14:26.873 "data_offset": 2048, 00:14:26.873 "data_size": 63488 00:14:26.873 }, 00:14:26.873 { 00:14:26.873 "name": "BaseBdev3", 00:14:26.873 "uuid": "8f6034f9-0795-5abb-90e4-846763c20022", 00:14:26.873 "is_configured": true, 00:14:26.873 "data_offset": 2048, 00:14:26.873 "data_size": 63488 00:14:26.873 }, 00:14:26.873 { 00:14:26.873 "name": "BaseBdev4", 00:14:26.873 "uuid": "efbf2c70-ec39-57a4-b39c-1d8199d0c39e", 00:14:26.873 "is_configured": true, 00:14:26.873 "data_offset": 2048, 00:14:26.873 "data_size": 63488 00:14:26.873 } 00:14:26.873 ] 00:14:26.873 }' 00:14:26.873 10:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.873 10:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.132 10:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 
00:14:27.132 10:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.132 10:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:27.132 10:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:27.132 10:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.132 10:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.132 10:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.132 10:42:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.132 10:42:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.132 10:42:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.132 10:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.132 "name": "raid_bdev1", 00:14:27.132 "uuid": "1556e8ba-8474-44be-811f-b5d5c443cd67", 00:14:27.132 "strip_size_kb": 0, 00:14:27.132 "state": "online", 00:14:27.132 "raid_level": "raid1", 00:14:27.132 "superblock": true, 00:14:27.132 "num_base_bdevs": 4, 00:14:27.132 "num_base_bdevs_discovered": 3, 00:14:27.132 "num_base_bdevs_operational": 3, 00:14:27.132 "base_bdevs_list": [ 00:14:27.132 { 00:14:27.132 "name": null, 00:14:27.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.132 "is_configured": false, 00:14:27.132 "data_offset": 0, 00:14:27.132 "data_size": 63488 00:14:27.132 }, 00:14:27.132 { 00:14:27.132 "name": "BaseBdev2", 00:14:27.132 "uuid": "bc50d863-98a2-55de-9bdc-8dab3f884c92", 00:14:27.132 "is_configured": true, 00:14:27.132 "data_offset": 2048, 00:14:27.132 "data_size": 63488 00:14:27.132 }, 00:14:27.132 { 00:14:27.132 "name": "BaseBdev3", 00:14:27.132 "uuid": 
"8f6034f9-0795-5abb-90e4-846763c20022", 00:14:27.132 "is_configured": true, 00:14:27.132 "data_offset": 2048, 00:14:27.132 "data_size": 63488 00:14:27.132 }, 00:14:27.132 { 00:14:27.132 "name": "BaseBdev4", 00:14:27.132 "uuid": "efbf2c70-ec39-57a4-b39c-1d8199d0c39e", 00:14:27.132 "is_configured": true, 00:14:27.132 "data_offset": 2048, 00:14:27.132 "data_size": 63488 00:14:27.132 } 00:14:27.132 ] 00:14:27.132 }' 00:14:27.132 10:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.390 10:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:27.390 10:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.390 10:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:27.390 10:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:27.390 10:42:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.390 10:42:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.390 [2024-11-15 10:42:48.383673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:27.390 [2024-11-15 10:42:48.397192] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:14:27.390 10:42:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.390 10:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:27.390 [2024-11-15 10:42:48.399783] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:28.324 10:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:28.324 10:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:28.324 10:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:28.324 10:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:28.324 10:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.324 10:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.324 10:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.324 10:42:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.324 10:42:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.324 10:42:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.324 10:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.324 "name": "raid_bdev1", 00:14:28.324 "uuid": "1556e8ba-8474-44be-811f-b5d5c443cd67", 00:14:28.324 "strip_size_kb": 0, 00:14:28.324 "state": "online", 00:14:28.324 "raid_level": "raid1", 00:14:28.324 "superblock": true, 00:14:28.324 "num_base_bdevs": 4, 00:14:28.324 "num_base_bdevs_discovered": 4, 00:14:28.324 "num_base_bdevs_operational": 4, 00:14:28.324 "process": { 00:14:28.324 "type": "rebuild", 00:14:28.324 "target": "spare", 00:14:28.324 "progress": { 00:14:28.324 "blocks": 20480, 00:14:28.324 "percent": 32 00:14:28.324 } 00:14:28.324 }, 00:14:28.324 "base_bdevs_list": [ 00:14:28.324 { 00:14:28.324 "name": "spare", 00:14:28.324 "uuid": "f6ac45cf-3e4b-5c22-b08e-ca6dbeb6488e", 00:14:28.324 "is_configured": true, 00:14:28.324 "data_offset": 2048, 00:14:28.324 "data_size": 63488 00:14:28.324 }, 00:14:28.324 { 00:14:28.324 "name": "BaseBdev2", 00:14:28.324 "uuid": "bc50d863-98a2-55de-9bdc-8dab3f884c92", 00:14:28.324 "is_configured": true, 00:14:28.324 "data_offset": 2048, 
00:14:28.324 "data_size": 63488 00:14:28.324 }, 00:14:28.324 { 00:14:28.324 "name": "BaseBdev3", 00:14:28.324 "uuid": "8f6034f9-0795-5abb-90e4-846763c20022", 00:14:28.324 "is_configured": true, 00:14:28.324 "data_offset": 2048, 00:14:28.324 "data_size": 63488 00:14:28.324 }, 00:14:28.324 { 00:14:28.324 "name": "BaseBdev4", 00:14:28.324 "uuid": "efbf2c70-ec39-57a4-b39c-1d8199d0c39e", 00:14:28.324 "is_configured": true, 00:14:28.324 "data_offset": 2048, 00:14:28.324 "data_size": 63488 00:14:28.324 } 00:14:28.324 ] 00:14:28.324 }' 00:14:28.324 10:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.582 10:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:28.582 10:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.582 10:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:28.582 10:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:28.582 10:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:28.582 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:28.582 10:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:28.582 10:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:28.582 10:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:28.582 10:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:28.582 10:42:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.582 10:42:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.582 [2024-11-15 10:42:49.553283] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:28.582 [2024-11-15 10:42:49.708691] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:14:28.582 10:42:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.582 10:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:28.582 10:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:28.582 10:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:28.582 10:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.582 10:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:28.582 10:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:28.582 10:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.582 10:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.582 10:42:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.582 10:42:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.582 10:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.582 10:42:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.841 10:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.841 "name": "raid_bdev1", 00:14:28.841 "uuid": "1556e8ba-8474-44be-811f-b5d5c443cd67", 00:14:28.841 "strip_size_kb": 0, 00:14:28.841 "state": "online", 00:14:28.841 "raid_level": "raid1", 00:14:28.841 "superblock": true, 00:14:28.841 "num_base_bdevs": 4, 
00:14:28.841 "num_base_bdevs_discovered": 3, 00:14:28.841 "num_base_bdevs_operational": 3, 00:14:28.841 "process": { 00:14:28.841 "type": "rebuild", 00:14:28.841 "target": "spare", 00:14:28.841 "progress": { 00:14:28.841 "blocks": 24576, 00:14:28.841 "percent": 38 00:14:28.841 } 00:14:28.841 }, 00:14:28.841 "base_bdevs_list": [ 00:14:28.841 { 00:14:28.841 "name": "spare", 00:14:28.841 "uuid": "f6ac45cf-3e4b-5c22-b08e-ca6dbeb6488e", 00:14:28.841 "is_configured": true, 00:14:28.841 "data_offset": 2048, 00:14:28.841 "data_size": 63488 00:14:28.841 }, 00:14:28.841 { 00:14:28.841 "name": null, 00:14:28.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.841 "is_configured": false, 00:14:28.841 "data_offset": 0, 00:14:28.841 "data_size": 63488 00:14:28.841 }, 00:14:28.841 { 00:14:28.841 "name": "BaseBdev3", 00:14:28.841 "uuid": "8f6034f9-0795-5abb-90e4-846763c20022", 00:14:28.841 "is_configured": true, 00:14:28.841 "data_offset": 2048, 00:14:28.841 "data_size": 63488 00:14:28.841 }, 00:14:28.841 { 00:14:28.841 "name": "BaseBdev4", 00:14:28.841 "uuid": "efbf2c70-ec39-57a4-b39c-1d8199d0c39e", 00:14:28.841 "is_configured": true, 00:14:28.841 "data_offset": 2048, 00:14:28.841 "data_size": 63488 00:14:28.841 } 00:14:28.841 ] 00:14:28.841 }' 00:14:28.841 10:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.841 10:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:28.841 10:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.841 10:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:28.841 10:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=498 00:14:28.841 10:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:28.841 10:42:49 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:28.841 10:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.841 10:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:28.841 10:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:28.841 10:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.841 10:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.841 10:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.841 10:42:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.841 10:42:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.841 10:42:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.841 10:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.841 "name": "raid_bdev1", 00:14:28.841 "uuid": "1556e8ba-8474-44be-811f-b5d5c443cd67", 00:14:28.841 "strip_size_kb": 0, 00:14:28.841 "state": "online", 00:14:28.841 "raid_level": "raid1", 00:14:28.841 "superblock": true, 00:14:28.841 "num_base_bdevs": 4, 00:14:28.841 "num_base_bdevs_discovered": 3, 00:14:28.841 "num_base_bdevs_operational": 3, 00:14:28.841 "process": { 00:14:28.841 "type": "rebuild", 00:14:28.841 "target": "spare", 00:14:28.841 "progress": { 00:14:28.841 "blocks": 26624, 00:14:28.841 "percent": 41 00:14:28.841 } 00:14:28.841 }, 00:14:28.841 "base_bdevs_list": [ 00:14:28.841 { 00:14:28.841 "name": "spare", 00:14:28.841 "uuid": "f6ac45cf-3e4b-5c22-b08e-ca6dbeb6488e", 00:14:28.841 "is_configured": true, 00:14:28.841 "data_offset": 2048, 00:14:28.841 "data_size": 63488 00:14:28.841 }, 00:14:28.841 { 
00:14:28.841 "name": null, 00:14:28.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.841 "is_configured": false, 00:14:28.841 "data_offset": 0, 00:14:28.841 "data_size": 63488 00:14:28.841 }, 00:14:28.841 { 00:14:28.841 "name": "BaseBdev3", 00:14:28.841 "uuid": "8f6034f9-0795-5abb-90e4-846763c20022", 00:14:28.841 "is_configured": true, 00:14:28.841 "data_offset": 2048, 00:14:28.841 "data_size": 63488 00:14:28.841 }, 00:14:28.841 { 00:14:28.841 "name": "BaseBdev4", 00:14:28.841 "uuid": "efbf2c70-ec39-57a4-b39c-1d8199d0c39e", 00:14:28.841 "is_configured": true, 00:14:28.841 "data_offset": 2048, 00:14:28.841 "data_size": 63488 00:14:28.841 } 00:14:28.841 ] 00:14:28.841 }' 00:14:28.841 10:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.841 10:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:28.841 10:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.098 10:42:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:29.098 10:42:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:30.032 10:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:30.032 10:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:30.032 10:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.032 10:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:30.032 10:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:30.032 10:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.032 10:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:14:30.032 10:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.032 10:42:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.032 10:42:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.032 10:42:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.032 10:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.032 "name": "raid_bdev1", 00:14:30.032 "uuid": "1556e8ba-8474-44be-811f-b5d5c443cd67", 00:14:30.032 "strip_size_kb": 0, 00:14:30.032 "state": "online", 00:14:30.032 "raid_level": "raid1", 00:14:30.032 "superblock": true, 00:14:30.032 "num_base_bdevs": 4, 00:14:30.032 "num_base_bdevs_discovered": 3, 00:14:30.032 "num_base_bdevs_operational": 3, 00:14:30.032 "process": { 00:14:30.032 "type": "rebuild", 00:14:30.032 "target": "spare", 00:14:30.032 "progress": { 00:14:30.032 "blocks": 51200, 00:14:30.032 "percent": 80 00:14:30.032 } 00:14:30.032 }, 00:14:30.032 "base_bdevs_list": [ 00:14:30.032 { 00:14:30.032 "name": "spare", 00:14:30.032 "uuid": "f6ac45cf-3e4b-5c22-b08e-ca6dbeb6488e", 00:14:30.032 "is_configured": true, 00:14:30.032 "data_offset": 2048, 00:14:30.032 "data_size": 63488 00:14:30.032 }, 00:14:30.032 { 00:14:30.032 "name": null, 00:14:30.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.032 "is_configured": false, 00:14:30.032 "data_offset": 0, 00:14:30.032 "data_size": 63488 00:14:30.032 }, 00:14:30.032 { 00:14:30.032 "name": "BaseBdev3", 00:14:30.032 "uuid": "8f6034f9-0795-5abb-90e4-846763c20022", 00:14:30.032 "is_configured": true, 00:14:30.032 "data_offset": 2048, 00:14:30.032 "data_size": 63488 00:14:30.032 }, 00:14:30.032 { 00:14:30.032 "name": "BaseBdev4", 00:14:30.032 "uuid": "efbf2c70-ec39-57a4-b39c-1d8199d0c39e", 00:14:30.032 "is_configured": true, 00:14:30.032 "data_offset": 
2048, 00:14:30.032 "data_size": 63488 00:14:30.032 } 00:14:30.032 ] 00:14:30.032 }' 00:14:30.032 10:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.291 10:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:30.291 10:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.291 10:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:30.291 10:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:30.549 [2024-11-15 10:42:51.622798] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:30.549 [2024-11-15 10:42:51.622923] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:30.549 [2024-11-15 10:42:51.623106] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:31.116 10:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:31.116 10:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:31.116 10:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.116 10:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:31.116 10:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:31.116 10:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.116 10:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.116 10:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.116 10:42:52 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.116 10:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.375 10:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.375 10:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.375 "name": "raid_bdev1", 00:14:31.375 "uuid": "1556e8ba-8474-44be-811f-b5d5c443cd67", 00:14:31.375 "strip_size_kb": 0, 00:14:31.375 "state": "online", 00:14:31.375 "raid_level": "raid1", 00:14:31.375 "superblock": true, 00:14:31.375 "num_base_bdevs": 4, 00:14:31.375 "num_base_bdevs_discovered": 3, 00:14:31.375 "num_base_bdevs_operational": 3, 00:14:31.375 "base_bdevs_list": [ 00:14:31.375 { 00:14:31.375 "name": "spare", 00:14:31.375 "uuid": "f6ac45cf-3e4b-5c22-b08e-ca6dbeb6488e", 00:14:31.375 "is_configured": true, 00:14:31.375 "data_offset": 2048, 00:14:31.375 "data_size": 63488 00:14:31.375 }, 00:14:31.375 { 00:14:31.375 "name": null, 00:14:31.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.375 "is_configured": false, 00:14:31.375 "data_offset": 0, 00:14:31.375 "data_size": 63488 00:14:31.375 }, 00:14:31.375 { 00:14:31.375 "name": "BaseBdev3", 00:14:31.375 "uuid": "8f6034f9-0795-5abb-90e4-846763c20022", 00:14:31.375 "is_configured": true, 00:14:31.375 "data_offset": 2048, 00:14:31.375 "data_size": 63488 00:14:31.375 }, 00:14:31.375 { 00:14:31.375 "name": "BaseBdev4", 00:14:31.375 "uuid": "efbf2c70-ec39-57a4-b39c-1d8199d0c39e", 00:14:31.375 "is_configured": true, 00:14:31.375 "data_offset": 2048, 00:14:31.375 "data_size": 63488 00:14:31.375 } 00:14:31.375 ] 00:14:31.375 }' 00:14:31.375 10:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.375 10:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:31.375 10:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:14:31.375 10:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:31.375 10:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:31.375 10:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:31.375 10:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.375 10:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:31.375 10:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:31.375 10:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.375 10:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.375 10:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.375 10:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.375 10:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.375 10:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.375 10:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.375 "name": "raid_bdev1", 00:14:31.375 "uuid": "1556e8ba-8474-44be-811f-b5d5c443cd67", 00:14:31.375 "strip_size_kb": 0, 00:14:31.375 "state": "online", 00:14:31.375 "raid_level": "raid1", 00:14:31.375 "superblock": true, 00:14:31.375 "num_base_bdevs": 4, 00:14:31.375 "num_base_bdevs_discovered": 3, 00:14:31.375 "num_base_bdevs_operational": 3, 00:14:31.375 "base_bdevs_list": [ 00:14:31.375 { 00:14:31.375 "name": "spare", 00:14:31.375 "uuid": "f6ac45cf-3e4b-5c22-b08e-ca6dbeb6488e", 00:14:31.375 "is_configured": true, 00:14:31.375 "data_offset": 2048, 
00:14:31.375 "data_size": 63488 00:14:31.375 }, 00:14:31.375 { 00:14:31.375 "name": null, 00:14:31.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.375 "is_configured": false, 00:14:31.375 "data_offset": 0, 00:14:31.375 "data_size": 63488 00:14:31.375 }, 00:14:31.375 { 00:14:31.375 "name": "BaseBdev3", 00:14:31.375 "uuid": "8f6034f9-0795-5abb-90e4-846763c20022", 00:14:31.375 "is_configured": true, 00:14:31.375 "data_offset": 2048, 00:14:31.375 "data_size": 63488 00:14:31.375 }, 00:14:31.375 { 00:14:31.375 "name": "BaseBdev4", 00:14:31.375 "uuid": "efbf2c70-ec39-57a4-b39c-1d8199d0c39e", 00:14:31.375 "is_configured": true, 00:14:31.375 "data_offset": 2048, 00:14:31.375 "data_size": 63488 00:14:31.375 } 00:14:31.375 ] 00:14:31.375 }' 00:14:31.375 10:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.375 10:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:31.375 10:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.634 10:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:31.634 10:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:31.634 10:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.634 10:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.634 10:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:31.634 10:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:31.634 10:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.634 10:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.634 
10:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.634 10:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.634 10:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.634 10:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.634 10:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.634 10:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.634 10:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.634 10:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.634 10:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.634 "name": "raid_bdev1", 00:14:31.634 "uuid": "1556e8ba-8474-44be-811f-b5d5c443cd67", 00:14:31.634 "strip_size_kb": 0, 00:14:31.634 "state": "online", 00:14:31.634 "raid_level": "raid1", 00:14:31.634 "superblock": true, 00:14:31.634 "num_base_bdevs": 4, 00:14:31.634 "num_base_bdevs_discovered": 3, 00:14:31.634 "num_base_bdevs_operational": 3, 00:14:31.634 "base_bdevs_list": [ 00:14:31.634 { 00:14:31.634 "name": "spare", 00:14:31.634 "uuid": "f6ac45cf-3e4b-5c22-b08e-ca6dbeb6488e", 00:14:31.634 "is_configured": true, 00:14:31.634 "data_offset": 2048, 00:14:31.634 "data_size": 63488 00:14:31.634 }, 00:14:31.634 { 00:14:31.634 "name": null, 00:14:31.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.634 "is_configured": false, 00:14:31.634 "data_offset": 0, 00:14:31.634 "data_size": 63488 00:14:31.634 }, 00:14:31.634 { 00:14:31.634 "name": "BaseBdev3", 00:14:31.634 "uuid": "8f6034f9-0795-5abb-90e4-846763c20022", 00:14:31.634 "is_configured": true, 00:14:31.634 "data_offset": 2048, 00:14:31.634 "data_size": 63488 
00:14:31.634 }, 00:14:31.634 { 00:14:31.634 "name": "BaseBdev4", 00:14:31.634 "uuid": "efbf2c70-ec39-57a4-b39c-1d8199d0c39e", 00:14:31.634 "is_configured": true, 00:14:31.634 "data_offset": 2048, 00:14:31.634 "data_size": 63488 00:14:31.634 } 00:14:31.634 ] 00:14:31.634 }' 00:14:31.634 10:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.634 10:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.892 10:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:31.892 10:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.892 10:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.892 [2024-11-15 10:42:53.047361] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:31.892 [2024-11-15 10:42:53.047406] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:31.892 [2024-11-15 10:42:53.047530] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:31.892 [2024-11-15 10:42:53.047651] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:31.892 [2024-11-15 10:42:53.047679] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:32.151 10:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.151 10:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:32.151 10:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.151 10:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.151 10:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.151 
10:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.151 10:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:32.151 10:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:32.151 10:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:32.151 10:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:32.151 10:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:32.151 10:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:32.151 10:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:32.151 10:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:32.151 10:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:32.151 10:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:32.151 10:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:32.151 10:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:32.151 10:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:32.410 /dev/nbd0 00:14:32.410 10:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:32.410 10:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:32.410 10:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:32.410 10:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 
00:14:32.410 10:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:32.410 10:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:32.410 10:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:32.410 10:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:32.410 10:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:32.410 10:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:32.410 10:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:32.410 1+0 records in 00:14:32.410 1+0 records out 00:14:32.410 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352129 s, 11.6 MB/s 00:14:32.410 10:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:32.410 10:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:32.410 10:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:32.410 10:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:32.410 10:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:32.410 10:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:32.410 10:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:32.411 10:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:32.668 /dev/nbd1 00:14:32.668 10:42:53 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:32.669 10:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:32.669 10:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:32.669 10:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:32.669 10:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:32.669 10:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:32.669 10:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:32.669 10:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:32.669 10:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:32.669 10:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:32.669 10:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:32.669 1+0 records in 00:14:32.669 1+0 records out 00:14:32.669 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384458 s, 10.7 MB/s 00:14:32.669 10:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:32.669 10:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:32.669 10:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:32.669 10:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:32.669 10:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:32.669 10:42:53 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:32.669 10:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:32.669 10:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:32.926 10:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:32.926 10:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:32.926 10:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:32.926 10:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:32.926 10:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:32.926 10:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:32.926 10:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:33.185 10:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:33.185 10:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:33.185 10:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:33.185 10:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:33.185 10:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:33.185 10:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:33.185 10:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:33.185 10:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:33.185 10:42:54 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:33.185 10:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:33.443 10:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:33.443 10:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:33.443 10:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:33.443 10:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:33.443 10:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:33.443 10:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:33.443 10:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:33.443 10:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:33.443 10:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:33.443 10:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:33.443 10:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.443 10:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.443 10:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.443 10:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:33.443 10:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.443 10:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.443 [2024-11-15 10:42:54.537264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:14:33.443 [2024-11-15 10:42:54.537331] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.443 [2024-11-15 10:42:54.537365] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:33.443 [2024-11-15 10:42:54.537382] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.443 [2024-11-15 10:42:54.540287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.443 [2024-11-15 10:42:54.540330] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:33.443 [2024-11-15 10:42:54.540454] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:33.443 [2024-11-15 10:42:54.540551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:33.443 [2024-11-15 10:42:54.540754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:33.443 [2024-11-15 10:42:54.540913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:33.443 spare 00:14:33.443 10:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.443 10:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:33.443 10:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.443 10:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.722 [2024-11-15 10:42:54.641047] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:33.722 [2024-11-15 10:42:54.641108] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:33.722 [2024-11-15 10:42:54.641570] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:33.723 [2024-11-15 10:42:54.641846] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:33.723 [2024-11-15 10:42:54.641880] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:33.723 [2024-11-15 10:42:54.642121] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.723 10:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.723 10:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:33.723 10:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:33.723 10:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.723 10:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:33.723 10:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:33.723 10:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:33.723 10:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.723 10:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.723 10:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.723 10:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.723 10:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.723 10:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.723 10:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.723 10:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:14:33.723 10:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.723 10:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.723 "name": "raid_bdev1", 00:14:33.723 "uuid": "1556e8ba-8474-44be-811f-b5d5c443cd67", 00:14:33.723 "strip_size_kb": 0, 00:14:33.723 "state": "online", 00:14:33.723 "raid_level": "raid1", 00:14:33.723 "superblock": true, 00:14:33.723 "num_base_bdevs": 4, 00:14:33.723 "num_base_bdevs_discovered": 3, 00:14:33.723 "num_base_bdevs_operational": 3, 00:14:33.723 "base_bdevs_list": [ 00:14:33.723 { 00:14:33.723 "name": "spare", 00:14:33.723 "uuid": "f6ac45cf-3e4b-5c22-b08e-ca6dbeb6488e", 00:14:33.723 "is_configured": true, 00:14:33.723 "data_offset": 2048, 00:14:33.724 "data_size": 63488 00:14:33.724 }, 00:14:33.724 { 00:14:33.724 "name": null, 00:14:33.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.724 "is_configured": false, 00:14:33.724 "data_offset": 2048, 00:14:33.724 "data_size": 63488 00:14:33.724 }, 00:14:33.724 { 00:14:33.724 "name": "BaseBdev3", 00:14:33.724 "uuid": "8f6034f9-0795-5abb-90e4-846763c20022", 00:14:33.724 "is_configured": true, 00:14:33.724 "data_offset": 2048, 00:14:33.724 "data_size": 63488 00:14:33.724 }, 00:14:33.724 { 00:14:33.724 "name": "BaseBdev4", 00:14:33.724 "uuid": "efbf2c70-ec39-57a4-b39c-1d8199d0c39e", 00:14:33.724 "is_configured": true, 00:14:33.724 "data_offset": 2048, 00:14:33.724 "data_size": 63488 00:14:33.724 } 00:14:33.724 ] 00:14:33.724 }' 00:14:33.724 10:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.724 10:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.990 10:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:33.990 10:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.991 10:42:55 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:33.991 10:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:33.991 10:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.991 10:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.991 10:42:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.991 10:42:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.991 10:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.249 10:42:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.249 10:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:34.249 "name": "raid_bdev1", 00:14:34.249 "uuid": "1556e8ba-8474-44be-811f-b5d5c443cd67", 00:14:34.249 "strip_size_kb": 0, 00:14:34.249 "state": "online", 00:14:34.249 "raid_level": "raid1", 00:14:34.249 "superblock": true, 00:14:34.249 "num_base_bdevs": 4, 00:14:34.249 "num_base_bdevs_discovered": 3, 00:14:34.249 "num_base_bdevs_operational": 3, 00:14:34.249 "base_bdevs_list": [ 00:14:34.249 { 00:14:34.249 "name": "spare", 00:14:34.249 "uuid": "f6ac45cf-3e4b-5c22-b08e-ca6dbeb6488e", 00:14:34.249 "is_configured": true, 00:14:34.249 "data_offset": 2048, 00:14:34.249 "data_size": 63488 00:14:34.249 }, 00:14:34.249 { 00:14:34.249 "name": null, 00:14:34.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.249 "is_configured": false, 00:14:34.249 "data_offset": 2048, 00:14:34.249 "data_size": 63488 00:14:34.249 }, 00:14:34.249 { 00:14:34.249 "name": "BaseBdev3", 00:14:34.249 "uuid": "8f6034f9-0795-5abb-90e4-846763c20022", 00:14:34.249 "is_configured": true, 00:14:34.249 "data_offset": 2048, 00:14:34.249 "data_size": 63488 00:14:34.249 
}, 00:14:34.249 { 00:14:34.249 "name": "BaseBdev4", 00:14:34.249 "uuid": "efbf2c70-ec39-57a4-b39c-1d8199d0c39e", 00:14:34.249 "is_configured": true, 00:14:34.249 "data_offset": 2048, 00:14:34.249 "data_size": 63488 00:14:34.249 } 00:14:34.249 ] 00:14:34.249 }' 00:14:34.249 10:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:34.249 10:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:34.249 10:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:34.249 10:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:34.249 10:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.249 10:42:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.249 10:42:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.249 10:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:34.249 10:42:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.249 10:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:34.249 10:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:34.249 10:42:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.249 10:42:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.249 [2024-11-15 10:42:55.338251] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:34.249 10:42:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.249 10:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:14:34.249 10:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:34.249 10:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:34.249 10:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:34.249 10:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:34.249 10:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:34.249 10:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.249 10:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.249 10:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.249 10:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.249 10:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.249 10:42:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.249 10:42:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.250 10:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.250 10:42:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.250 10:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.250 "name": "raid_bdev1", 00:14:34.250 "uuid": "1556e8ba-8474-44be-811f-b5d5c443cd67", 00:14:34.250 "strip_size_kb": 0, 00:14:34.250 "state": "online", 00:14:34.250 "raid_level": "raid1", 00:14:34.250 "superblock": true, 00:14:34.250 "num_base_bdevs": 4, 00:14:34.250 "num_base_bdevs_discovered": 2, 00:14:34.250 "num_base_bdevs_operational": 
2, 00:14:34.250 "base_bdevs_list": [ 00:14:34.250 { 00:14:34.250 "name": null, 00:14:34.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.250 "is_configured": false, 00:14:34.250 "data_offset": 0, 00:14:34.250 "data_size": 63488 00:14:34.250 }, 00:14:34.250 { 00:14:34.250 "name": null, 00:14:34.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.250 "is_configured": false, 00:14:34.250 "data_offset": 2048, 00:14:34.250 "data_size": 63488 00:14:34.250 }, 00:14:34.250 { 00:14:34.250 "name": "BaseBdev3", 00:14:34.250 "uuid": "8f6034f9-0795-5abb-90e4-846763c20022", 00:14:34.250 "is_configured": true, 00:14:34.250 "data_offset": 2048, 00:14:34.250 "data_size": 63488 00:14:34.250 }, 00:14:34.250 { 00:14:34.250 "name": "BaseBdev4", 00:14:34.250 "uuid": "efbf2c70-ec39-57a4-b39c-1d8199d0c39e", 00:14:34.250 "is_configured": true, 00:14:34.250 "data_offset": 2048, 00:14:34.250 "data_size": 63488 00:14:34.250 } 00:14:34.250 ] 00:14:34.250 }' 00:14:34.250 10:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.250 10:42:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.816 10:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:34.816 10:42:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.816 10:42:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.816 [2024-11-15 10:42:55.850431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:34.816 [2024-11-15 10:42:55.850705] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:34.816 [2024-11-15 10:42:55.850737] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:34.816 [2024-11-15 10:42:55.850789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:34.816 [2024-11-15 10:42:55.863988] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:14:34.816 10:42:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.816 10:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:34.816 [2024-11-15 10:42:55.866538] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:35.750 10:42:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:35.750 10:42:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:35.750 10:42:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:35.750 10:42:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:35.750 10:42:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:35.750 10:42:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.750 10:42:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.750 10:42:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.750 10:42:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.750 10:42:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.009 10:42:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:36.009 "name": "raid_bdev1", 00:14:36.009 "uuid": "1556e8ba-8474-44be-811f-b5d5c443cd67", 00:14:36.009 "strip_size_kb": 0, 00:14:36.009 "state": "online", 00:14:36.009 "raid_level": "raid1", 
00:14:36.009 "superblock": true, 00:14:36.009 "num_base_bdevs": 4, 00:14:36.009 "num_base_bdevs_discovered": 3, 00:14:36.009 "num_base_bdevs_operational": 3, 00:14:36.009 "process": { 00:14:36.009 "type": "rebuild", 00:14:36.009 "target": "spare", 00:14:36.009 "progress": { 00:14:36.009 "blocks": 20480, 00:14:36.009 "percent": 32 00:14:36.009 } 00:14:36.009 }, 00:14:36.009 "base_bdevs_list": [ 00:14:36.009 { 00:14:36.009 "name": "spare", 00:14:36.009 "uuid": "f6ac45cf-3e4b-5c22-b08e-ca6dbeb6488e", 00:14:36.009 "is_configured": true, 00:14:36.009 "data_offset": 2048, 00:14:36.009 "data_size": 63488 00:14:36.009 }, 00:14:36.009 { 00:14:36.009 "name": null, 00:14:36.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.009 "is_configured": false, 00:14:36.009 "data_offset": 2048, 00:14:36.009 "data_size": 63488 00:14:36.009 }, 00:14:36.009 { 00:14:36.009 "name": "BaseBdev3", 00:14:36.009 "uuid": "8f6034f9-0795-5abb-90e4-846763c20022", 00:14:36.009 "is_configured": true, 00:14:36.009 "data_offset": 2048, 00:14:36.009 "data_size": 63488 00:14:36.009 }, 00:14:36.009 { 00:14:36.009 "name": "BaseBdev4", 00:14:36.009 "uuid": "efbf2c70-ec39-57a4-b39c-1d8199d0c39e", 00:14:36.009 "is_configured": true, 00:14:36.009 "data_offset": 2048, 00:14:36.009 "data_size": 63488 00:14:36.009 } 00:14:36.009 ] 00:14:36.009 }' 00:14:36.009 10:42:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:36.009 10:42:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:36.009 10:42:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:36.009 10:42:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:36.009 10:42:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:36.009 10:42:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:36.009 10:42:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.009 [2024-11-15 10:42:57.023689] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:36.009 [2024-11-15 10:42:57.075615] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:36.009 [2024-11-15 10:42:57.075694] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:36.009 [2024-11-15 10:42:57.075722] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:36.009 [2024-11-15 10:42:57.075734] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:36.009 10:42:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.009 10:42:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:36.009 10:42:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:36.010 10:42:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:36.010 10:42:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:36.010 10:42:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:36.010 10:42:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:36.010 10:42:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.010 10:42:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.010 10:42:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.010 10:42:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.010 10:42:57 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.010 10:42:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.010 10:42:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.010 10:42:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.010 10:42:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.010 10:42:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.010 "name": "raid_bdev1", 00:14:36.010 "uuid": "1556e8ba-8474-44be-811f-b5d5c443cd67", 00:14:36.010 "strip_size_kb": 0, 00:14:36.010 "state": "online", 00:14:36.010 "raid_level": "raid1", 00:14:36.010 "superblock": true, 00:14:36.010 "num_base_bdevs": 4, 00:14:36.010 "num_base_bdevs_discovered": 2, 00:14:36.010 "num_base_bdevs_operational": 2, 00:14:36.010 "base_bdevs_list": [ 00:14:36.010 { 00:14:36.010 "name": null, 00:14:36.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.010 "is_configured": false, 00:14:36.010 "data_offset": 0, 00:14:36.010 "data_size": 63488 00:14:36.010 }, 00:14:36.010 { 00:14:36.010 "name": null, 00:14:36.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.010 "is_configured": false, 00:14:36.010 "data_offset": 2048, 00:14:36.010 "data_size": 63488 00:14:36.010 }, 00:14:36.010 { 00:14:36.010 "name": "BaseBdev3", 00:14:36.010 "uuid": "8f6034f9-0795-5abb-90e4-846763c20022", 00:14:36.010 "is_configured": true, 00:14:36.010 "data_offset": 2048, 00:14:36.010 "data_size": 63488 00:14:36.010 }, 00:14:36.010 { 00:14:36.010 "name": "BaseBdev4", 00:14:36.010 "uuid": "efbf2c70-ec39-57a4-b39c-1d8199d0c39e", 00:14:36.010 "is_configured": true, 00:14:36.010 "data_offset": 2048, 00:14:36.010 "data_size": 63488 00:14:36.010 } 00:14:36.010 ] 00:14:36.010 }' 00:14:36.010 10:42:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:36.010 10:42:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.576 10:42:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:36.576 10:42:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.576 10:42:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.576 [2024-11-15 10:42:57.623632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:36.576 [2024-11-15 10:42:57.623709] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.576 [2024-11-15 10:42:57.623757] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:14:36.576 [2024-11-15 10:42:57.623774] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.576 [2024-11-15 10:42:57.624391] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.576 [2024-11-15 10:42:57.624432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:36.576 [2024-11-15 10:42:57.624571] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:36.576 [2024-11-15 10:42:57.624592] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:36.576 [2024-11-15 10:42:57.624612] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:36.576 [2024-11-15 10:42:57.624653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:36.576 [2024-11-15 10:42:57.638423] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:14:36.576 spare 00:14:36.576 10:42:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.576 10:42:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:36.576 [2024-11-15 10:42:57.640976] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:37.513 10:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:37.513 10:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.513 10:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:37.513 10:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:37.513 10:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.513 10:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.513 10:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.513 10:42:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.513 10:42:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.513 10:42:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.772 10:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.772 "name": "raid_bdev1", 00:14:37.772 "uuid": "1556e8ba-8474-44be-811f-b5d5c443cd67", 00:14:37.772 "strip_size_kb": 0, 00:14:37.772 "state": "online", 00:14:37.772 
"raid_level": "raid1", 00:14:37.772 "superblock": true, 00:14:37.772 "num_base_bdevs": 4, 00:14:37.772 "num_base_bdevs_discovered": 3, 00:14:37.772 "num_base_bdevs_operational": 3, 00:14:37.772 "process": { 00:14:37.772 "type": "rebuild", 00:14:37.772 "target": "spare", 00:14:37.772 "progress": { 00:14:37.772 "blocks": 20480, 00:14:37.772 "percent": 32 00:14:37.772 } 00:14:37.772 }, 00:14:37.772 "base_bdevs_list": [ 00:14:37.772 { 00:14:37.772 "name": "spare", 00:14:37.772 "uuid": "f6ac45cf-3e4b-5c22-b08e-ca6dbeb6488e", 00:14:37.772 "is_configured": true, 00:14:37.772 "data_offset": 2048, 00:14:37.772 "data_size": 63488 00:14:37.772 }, 00:14:37.772 { 00:14:37.772 "name": null, 00:14:37.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.772 "is_configured": false, 00:14:37.772 "data_offset": 2048, 00:14:37.772 "data_size": 63488 00:14:37.772 }, 00:14:37.772 { 00:14:37.772 "name": "BaseBdev3", 00:14:37.772 "uuid": "8f6034f9-0795-5abb-90e4-846763c20022", 00:14:37.772 "is_configured": true, 00:14:37.772 "data_offset": 2048, 00:14:37.772 "data_size": 63488 00:14:37.772 }, 00:14:37.772 { 00:14:37.772 "name": "BaseBdev4", 00:14:37.772 "uuid": "efbf2c70-ec39-57a4-b39c-1d8199d0c39e", 00:14:37.772 "is_configured": true, 00:14:37.772 "data_offset": 2048, 00:14:37.772 "data_size": 63488 00:14:37.772 } 00:14:37.772 ] 00:14:37.772 }' 00:14:37.772 10:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.772 10:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:37.772 10:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.772 10:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:37.772 10:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:37.772 10:42:58 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.772 10:42:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.772 [2024-11-15 10:42:58.814411] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:37.772 [2024-11-15 10:42:58.849654] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:37.772 [2024-11-15 10:42:58.849730] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.772 [2024-11-15 10:42:58.849755] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:37.772 [2024-11-15 10:42:58.849771] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:37.772 10:42:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.772 10:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:37.772 10:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.773 10:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.773 10:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:37.773 10:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:37.773 10:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:37.773 10:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.773 10:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.773 10:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.773 10:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.773 
10:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.773 10:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.773 10:42:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.773 10:42:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.773 10:42:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.773 10:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.773 "name": "raid_bdev1", 00:14:37.773 "uuid": "1556e8ba-8474-44be-811f-b5d5c443cd67", 00:14:37.773 "strip_size_kb": 0, 00:14:37.773 "state": "online", 00:14:37.773 "raid_level": "raid1", 00:14:37.773 "superblock": true, 00:14:37.773 "num_base_bdevs": 4, 00:14:37.773 "num_base_bdevs_discovered": 2, 00:14:37.773 "num_base_bdevs_operational": 2, 00:14:37.773 "base_bdevs_list": [ 00:14:37.773 { 00:14:37.773 "name": null, 00:14:37.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.773 "is_configured": false, 00:14:37.773 "data_offset": 0, 00:14:37.773 "data_size": 63488 00:14:37.773 }, 00:14:37.773 { 00:14:37.773 "name": null, 00:14:37.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.773 "is_configured": false, 00:14:37.773 "data_offset": 2048, 00:14:37.773 "data_size": 63488 00:14:37.773 }, 00:14:37.773 { 00:14:37.773 "name": "BaseBdev3", 00:14:37.773 "uuid": "8f6034f9-0795-5abb-90e4-846763c20022", 00:14:37.773 "is_configured": true, 00:14:37.773 "data_offset": 2048, 00:14:37.773 "data_size": 63488 00:14:37.773 }, 00:14:37.773 { 00:14:37.773 "name": "BaseBdev4", 00:14:37.773 "uuid": "efbf2c70-ec39-57a4-b39c-1d8199d0c39e", 00:14:37.773 "is_configured": true, 00:14:37.773 "data_offset": 2048, 00:14:37.773 "data_size": 63488 00:14:37.773 } 00:14:37.773 ] 00:14:37.773 }' 00:14:37.773 10:42:58 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.773 10:42:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.339 10:42:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:38.339 10:42:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.339 10:42:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:38.339 10:42:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:38.339 10:42:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.339 10:42:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.339 10:42:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.339 10:42:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.339 10:42:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.339 10:42:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.339 10:42:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:38.339 "name": "raid_bdev1", 00:14:38.339 "uuid": "1556e8ba-8474-44be-811f-b5d5c443cd67", 00:14:38.339 "strip_size_kb": 0, 00:14:38.339 "state": "online", 00:14:38.339 "raid_level": "raid1", 00:14:38.339 "superblock": true, 00:14:38.339 "num_base_bdevs": 4, 00:14:38.339 "num_base_bdevs_discovered": 2, 00:14:38.339 "num_base_bdevs_operational": 2, 00:14:38.339 "base_bdevs_list": [ 00:14:38.339 { 00:14:38.339 "name": null, 00:14:38.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.339 "is_configured": false, 00:14:38.339 "data_offset": 0, 00:14:38.339 "data_size": 63488 00:14:38.339 }, 00:14:38.339 
{ 00:14:38.339 "name": null, 00:14:38.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.339 "is_configured": false, 00:14:38.339 "data_offset": 2048, 00:14:38.339 "data_size": 63488 00:14:38.339 }, 00:14:38.339 { 00:14:38.339 "name": "BaseBdev3", 00:14:38.339 "uuid": "8f6034f9-0795-5abb-90e4-846763c20022", 00:14:38.339 "is_configured": true, 00:14:38.339 "data_offset": 2048, 00:14:38.339 "data_size": 63488 00:14:38.339 }, 00:14:38.339 { 00:14:38.339 "name": "BaseBdev4", 00:14:38.339 "uuid": "efbf2c70-ec39-57a4-b39c-1d8199d0c39e", 00:14:38.339 "is_configured": true, 00:14:38.339 "data_offset": 2048, 00:14:38.339 "data_size": 63488 00:14:38.339 } 00:14:38.339 ] 00:14:38.339 }' 00:14:38.339 10:42:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:38.339 10:42:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:38.339 10:42:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.598 10:42:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:38.598 10:42:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:38.598 10:42:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.598 10:42:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.598 10:42:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.598 10:42:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:38.598 10:42:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.598 10:42:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.598 [2024-11-15 10:42:59.517986] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:38.598 [2024-11-15 10:42:59.518053] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:38.598 [2024-11-15 10:42:59.518082] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:14:38.598 [2024-11-15 10:42:59.518100] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:38.598 [2024-11-15 10:42:59.518671] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:38.598 [2024-11-15 10:42:59.518718] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:38.598 [2024-11-15 10:42:59.518824] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:38.598 [2024-11-15 10:42:59.518854] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:38.598 [2024-11-15 10:42:59.518866] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:38.598 [2024-11-15 10:42:59.518897] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:38.598 BaseBdev1 00:14:38.598 10:42:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.598 10:42:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:39.534 10:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:39.534 10:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.534 10:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.534 10:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:39.534 10:43:00 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:39.534 10:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:39.534 10:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.534 10:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.534 10:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.534 10:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.534 10:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.534 10:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.534 10:43:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.534 10:43:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.534 10:43:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.534 10:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.534 "name": "raid_bdev1", 00:14:39.534 "uuid": "1556e8ba-8474-44be-811f-b5d5c443cd67", 00:14:39.534 "strip_size_kb": 0, 00:14:39.534 "state": "online", 00:14:39.534 "raid_level": "raid1", 00:14:39.534 "superblock": true, 00:14:39.534 "num_base_bdevs": 4, 00:14:39.534 "num_base_bdevs_discovered": 2, 00:14:39.534 "num_base_bdevs_operational": 2, 00:14:39.534 "base_bdevs_list": [ 00:14:39.534 { 00:14:39.534 "name": null, 00:14:39.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.534 "is_configured": false, 00:14:39.534 "data_offset": 0, 00:14:39.534 "data_size": 63488 00:14:39.534 }, 00:14:39.534 { 00:14:39.534 "name": null, 00:14:39.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.534 
"is_configured": false, 00:14:39.534 "data_offset": 2048, 00:14:39.534 "data_size": 63488 00:14:39.534 }, 00:14:39.534 { 00:14:39.534 "name": "BaseBdev3", 00:14:39.534 "uuid": "8f6034f9-0795-5abb-90e4-846763c20022", 00:14:39.534 "is_configured": true, 00:14:39.534 "data_offset": 2048, 00:14:39.534 "data_size": 63488 00:14:39.534 }, 00:14:39.534 { 00:14:39.534 "name": "BaseBdev4", 00:14:39.534 "uuid": "efbf2c70-ec39-57a4-b39c-1d8199d0c39e", 00:14:39.534 "is_configured": true, 00:14:39.534 "data_offset": 2048, 00:14:39.534 "data_size": 63488 00:14:39.534 } 00:14:39.534 ] 00:14:39.534 }' 00:14:39.534 10:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.534 10:43:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.101 10:43:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:40.101 10:43:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.101 10:43:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:40.101 10:43:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:40.101 10:43:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.101 10:43:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.101 10:43:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.101 10:43:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.101 10:43:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.101 10:43:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.101 10:43:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:14:40.101 "name": "raid_bdev1", 00:14:40.101 "uuid": "1556e8ba-8474-44be-811f-b5d5c443cd67", 00:14:40.101 "strip_size_kb": 0, 00:14:40.101 "state": "online", 00:14:40.101 "raid_level": "raid1", 00:14:40.101 "superblock": true, 00:14:40.101 "num_base_bdevs": 4, 00:14:40.101 "num_base_bdevs_discovered": 2, 00:14:40.101 "num_base_bdevs_operational": 2, 00:14:40.101 "base_bdevs_list": [ 00:14:40.101 { 00:14:40.101 "name": null, 00:14:40.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.101 "is_configured": false, 00:14:40.101 "data_offset": 0, 00:14:40.102 "data_size": 63488 00:14:40.102 }, 00:14:40.102 { 00:14:40.102 "name": null, 00:14:40.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.102 "is_configured": false, 00:14:40.102 "data_offset": 2048, 00:14:40.102 "data_size": 63488 00:14:40.102 }, 00:14:40.102 { 00:14:40.102 "name": "BaseBdev3", 00:14:40.102 "uuid": "8f6034f9-0795-5abb-90e4-846763c20022", 00:14:40.102 "is_configured": true, 00:14:40.102 "data_offset": 2048, 00:14:40.102 "data_size": 63488 00:14:40.102 }, 00:14:40.102 { 00:14:40.102 "name": "BaseBdev4", 00:14:40.102 "uuid": "efbf2c70-ec39-57a4-b39c-1d8199d0c39e", 00:14:40.102 "is_configured": true, 00:14:40.102 "data_offset": 2048, 00:14:40.102 "data_size": 63488 00:14:40.102 } 00:14:40.102 ] 00:14:40.102 }' 00:14:40.102 10:43:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.102 10:43:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:40.102 10:43:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.102 10:43:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:40.102 10:43:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:40.102 10:43:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:14:40.102 10:43:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:40.102 10:43:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:40.102 10:43:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:40.102 10:43:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:40.102 10:43:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:40.102 10:43:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:40.102 10:43:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.102 10:43:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.102 [2024-11-15 10:43:01.194504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:40.102 [2024-11-15 10:43:01.194738] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:40.102 [2024-11-15 10:43:01.194775] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:40.102 request: 00:14:40.102 { 00:14:40.102 "base_bdev": "BaseBdev1", 00:14:40.102 "raid_bdev": "raid_bdev1", 00:14:40.102 "method": "bdev_raid_add_base_bdev", 00:14:40.102 "req_id": 1 00:14:40.102 } 00:14:40.102 Got JSON-RPC error response 00:14:40.102 response: 00:14:40.102 { 00:14:40.102 "code": -22, 00:14:40.102 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:40.102 } 00:14:40.102 10:43:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:40.102 10:43:01 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:14:40.102 10:43:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:40.102 10:43:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:40.102 10:43:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:40.102 10:43:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:41.474 10:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:41.474 10:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.474 10:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:41.474 10:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:41.474 10:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:41.474 10:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:41.474 10:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.474 10:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.474 10:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.474 10:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.474 10:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.474 10:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.474 10:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.474 10:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:41.474 10:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.474 10:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.474 "name": "raid_bdev1", 00:14:41.474 "uuid": "1556e8ba-8474-44be-811f-b5d5c443cd67", 00:14:41.474 "strip_size_kb": 0, 00:14:41.474 "state": "online", 00:14:41.474 "raid_level": "raid1", 00:14:41.474 "superblock": true, 00:14:41.474 "num_base_bdevs": 4, 00:14:41.474 "num_base_bdevs_discovered": 2, 00:14:41.474 "num_base_bdevs_operational": 2, 00:14:41.474 "base_bdevs_list": [ 00:14:41.474 { 00:14:41.474 "name": null, 00:14:41.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.474 "is_configured": false, 00:14:41.474 "data_offset": 0, 00:14:41.474 "data_size": 63488 00:14:41.474 }, 00:14:41.474 { 00:14:41.474 "name": null, 00:14:41.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.474 "is_configured": false, 00:14:41.474 "data_offset": 2048, 00:14:41.474 "data_size": 63488 00:14:41.474 }, 00:14:41.474 { 00:14:41.474 "name": "BaseBdev3", 00:14:41.474 "uuid": "8f6034f9-0795-5abb-90e4-846763c20022", 00:14:41.474 "is_configured": true, 00:14:41.474 "data_offset": 2048, 00:14:41.474 "data_size": 63488 00:14:41.474 }, 00:14:41.474 { 00:14:41.474 "name": "BaseBdev4", 00:14:41.474 "uuid": "efbf2c70-ec39-57a4-b39c-1d8199d0c39e", 00:14:41.474 "is_configured": true, 00:14:41.474 "data_offset": 2048, 00:14:41.474 "data_size": 63488 00:14:41.474 } 00:14:41.474 ] 00:14:41.474 }' 00:14:41.474 10:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.474 10:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.732 10:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:41.732 10:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.732 10:43:02 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:41.732 10:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:41.732 10:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.732 10:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.732 10:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.732 10:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.732 10:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.732 10:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.732 10:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.732 "name": "raid_bdev1", 00:14:41.732 "uuid": "1556e8ba-8474-44be-811f-b5d5c443cd67", 00:14:41.732 "strip_size_kb": 0, 00:14:41.732 "state": "online", 00:14:41.732 "raid_level": "raid1", 00:14:41.733 "superblock": true, 00:14:41.733 "num_base_bdevs": 4, 00:14:41.733 "num_base_bdevs_discovered": 2, 00:14:41.733 "num_base_bdevs_operational": 2, 00:14:41.733 "base_bdevs_list": [ 00:14:41.733 { 00:14:41.733 "name": null, 00:14:41.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.733 "is_configured": false, 00:14:41.733 "data_offset": 0, 00:14:41.733 "data_size": 63488 00:14:41.733 }, 00:14:41.733 { 00:14:41.733 "name": null, 00:14:41.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.733 "is_configured": false, 00:14:41.733 "data_offset": 2048, 00:14:41.733 "data_size": 63488 00:14:41.733 }, 00:14:41.733 { 00:14:41.733 "name": "BaseBdev3", 00:14:41.733 "uuid": "8f6034f9-0795-5abb-90e4-846763c20022", 00:14:41.733 "is_configured": true, 00:14:41.733 "data_offset": 2048, 00:14:41.733 "data_size": 63488 00:14:41.733 }, 
00:14:41.733 { 00:14:41.733 "name": "BaseBdev4", 00:14:41.733 "uuid": "efbf2c70-ec39-57a4-b39c-1d8199d0c39e", 00:14:41.733 "is_configured": true, 00:14:41.733 "data_offset": 2048, 00:14:41.733 "data_size": 63488 00:14:41.733 } 00:14:41.733 ] 00:14:41.733 }' 00:14:41.733 10:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.733 10:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:41.733 10:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.733 10:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:41.733 10:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78223 00:14:41.733 10:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78223 ']' 00:14:41.733 10:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78223 00:14:41.733 10:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:41.733 10:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:41.733 10:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78223 00:14:41.733 10:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:41.733 10:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:41.733 killing process with pid 78223 00:14:41.733 10:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78223' 00:14:41.733 10:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78223 00:14:41.733 Received shutdown signal, test time was about 60.000000 seconds 00:14:41.733 00:14:41.733 Latency(us) 00:14:41.733 
[2024-11-15T10:43:02.895Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.733 [2024-11-15T10:43:02.895Z] =================================================================================================================== 00:14:41.733 [2024-11-15T10:43:02.895Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:41.733 [2024-11-15 10:43:02.879710] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:41.733 10:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78223 00:14:41.733 [2024-11-15 10:43:02.879867] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:41.733 [2024-11-15 10:43:02.879959] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:41.733 [2024-11-15 10:43:02.879975] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:42.299 [2024-11-15 10:43:03.315800] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:43.232 10:43:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:43.232 00:14:43.232 real 0m29.205s 00:14:43.232 user 0m35.092s 00:14:43.232 sys 0m4.082s 00:14:43.232 10:43:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:43.232 10:43:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.232 ************************************ 00:14:43.232 END TEST raid_rebuild_test_sb 00:14:43.232 ************************************ 00:14:43.232 10:43:04 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:14:43.232 10:43:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:43.232 10:43:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:43.232 10:43:04 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:14:43.490 ************************************ 00:14:43.490 START TEST raid_rebuild_test_io 00:14:43.490 ************************************ 00:14:43.491 10:43:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:14:43.491 10:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:43.491 10:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:43.491 10:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:43.491 10:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:43.491 10:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:43.491 10:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:43.491 10:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:43.491 10:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:43.491 10:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:43.491 10:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:43.491 10:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:43.491 10:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:43.491 10:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:43.491 10:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:43.491 10:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:43.491 10:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:43.491 10:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev4 00:14:43.491 10:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:43.491 10:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:43.491 10:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:43.491 10:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:43.491 10:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:43.491 10:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:43.491 10:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:43.491 10:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:43.491 10:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:43.491 10:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:43.491 10:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:43.491 10:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:43.491 10:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79016 00:14:43.491 10:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:43.491 10:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79016 00:14:43.491 10:43:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 79016 ']' 00:14:43.491 10:43:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.491 10:43:04 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:14:43.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.491 10:43:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.491 10:43:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:43.491 10:43:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.491 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:43.491 Zero copy mechanism will not be used. 00:14:43.491 [2024-11-15 10:43:04.571010] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:14:43.491 [2024-11-15 10:43:04.571176] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79016 ] 00:14:43.753 [2024-11-15 10:43:04.753797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.753 [2024-11-15 10:43:04.885835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.012 [2024-11-15 10:43:05.088640] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:44.012 [2024-11-15 10:43:05.088700] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:44.580 10:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:44.580 10:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:14:44.580 10:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:44.581 10:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:14:44.581 10:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.581 10:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.581 BaseBdev1_malloc 00:14:44.581 10:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.581 10:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:44.581 10:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.581 10:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.581 [2024-11-15 10:43:05.529032] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:44.581 [2024-11-15 10:43:05.529108] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.581 [2024-11-15 10:43:05.529141] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:44.581 [2024-11-15 10:43:05.529161] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.581 [2024-11-15 10:43:05.531859] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.581 [2024-11-15 10:43:05.531909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:44.581 BaseBdev1 00:14:44.581 10:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.581 10:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:44.581 10:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:44.581 10:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.581 10:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:14:44.581 BaseBdev2_malloc 00:14:44.581 10:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.581 10:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:44.581 10:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.581 10:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.581 [2024-11-15 10:43:05.580932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:44.581 [2024-11-15 10:43:05.581003] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.581 [2024-11-15 10:43:05.581031] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:44.581 [2024-11-15 10:43:05.581051] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.581 [2024-11-15 10:43:05.583713] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.581 [2024-11-15 10:43:05.583761] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:44.581 BaseBdev2 00:14:44.581 10:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.581 10:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:44.581 10:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:44.581 10:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.581 10:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.581 BaseBdev3_malloc 00:14:44.581 10:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.581 10:43:05 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:44.581 10:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.581 10:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.581 [2024-11-15 10:43:05.647223] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:44.581 [2024-11-15 10:43:05.647288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.581 [2024-11-15 10:43:05.647319] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:44.581 [2024-11-15 10:43:05.647338] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.581 [2024-11-15 10:43:05.650096] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.581 [2024-11-15 10:43:05.650144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:44.581 BaseBdev3 00:14:44.581 10:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.581 10:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:44.581 10:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:44.581 10:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.581 10:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.581 BaseBdev4_malloc 00:14:44.581 10:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.581 10:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:44.581 10:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:44.581 10:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.581 [2024-11-15 10:43:05.699372] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:44.581 [2024-11-15 10:43:05.699435] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.581 [2024-11-15 10:43:05.699462] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:44.581 [2024-11-15 10:43:05.699480] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.581 [2024-11-15 10:43:05.702183] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.581 [2024-11-15 10:43:05.702232] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:44.581 BaseBdev4 00:14:44.581 10:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.581 10:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:44.581 10:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.581 10:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.843 spare_malloc 00:14:44.843 10:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.843 10:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:44.843 10:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.843 10:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.843 spare_delay 00:14:44.843 10:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.843 10:43:05 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:44.843 10:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.843 10:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.843 [2024-11-15 10:43:05.763724] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:44.843 [2024-11-15 10:43:05.763794] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.843 [2024-11-15 10:43:05.763824] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:44.843 [2024-11-15 10:43:05.763842] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.843 [2024-11-15 10:43:05.766671] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.843 [2024-11-15 10:43:05.766722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:44.843 spare 00:14:44.843 10:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.843 10:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:44.843 10:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.843 10:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.843 [2024-11-15 10:43:05.771774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:44.843 [2024-11-15 10:43:05.774173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:44.843 [2024-11-15 10:43:05.774273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:44.843 [2024-11-15 10:43:05.774359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:14:44.843 [2024-11-15 10:43:05.774470] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:44.843 [2024-11-15 10:43:05.774513] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:44.843 [2024-11-15 10:43:05.774842] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:44.843 [2024-11-15 10:43:05.775076] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:44.843 [2024-11-15 10:43:05.775106] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:44.843 [2024-11-15 10:43:05.775293] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.843 10:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.843 10:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:44.843 10:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.843 10:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.843 10:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:44.843 10:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:44.843 10:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:44.843 10:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.843 10:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.843 10:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.843 10:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:14:44.843 10:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.843 10:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.843 10:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.843 10:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.843 10:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.843 10:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.843 "name": "raid_bdev1", 00:14:44.843 "uuid": "656c3e9a-cc73-45f4-9409-1f20115324d4", 00:14:44.843 "strip_size_kb": 0, 00:14:44.843 "state": "online", 00:14:44.843 "raid_level": "raid1", 00:14:44.843 "superblock": false, 00:14:44.843 "num_base_bdevs": 4, 00:14:44.843 "num_base_bdevs_discovered": 4, 00:14:44.843 "num_base_bdevs_operational": 4, 00:14:44.843 "base_bdevs_list": [ 00:14:44.843 { 00:14:44.843 "name": "BaseBdev1", 00:14:44.843 "uuid": "bd9cb119-1a21-5bbb-8bd2-d035a2fcfa86", 00:14:44.843 "is_configured": true, 00:14:44.843 "data_offset": 0, 00:14:44.843 "data_size": 65536 00:14:44.843 }, 00:14:44.843 { 00:14:44.843 "name": "BaseBdev2", 00:14:44.843 "uuid": "1e35fcdd-8188-570f-81c4-e9d75138ed85", 00:14:44.843 "is_configured": true, 00:14:44.843 "data_offset": 0, 00:14:44.843 "data_size": 65536 00:14:44.843 }, 00:14:44.843 { 00:14:44.843 "name": "BaseBdev3", 00:14:44.843 "uuid": "2c08bc1a-acad-5075-96a1-421c7e536a00", 00:14:44.843 "is_configured": true, 00:14:44.843 "data_offset": 0, 00:14:44.843 "data_size": 65536 00:14:44.843 }, 00:14:44.843 { 00:14:44.843 "name": "BaseBdev4", 00:14:44.843 "uuid": "dc749a7c-0e6e-574a-bf77-a6b3414e04fa", 00:14:44.843 "is_configured": true, 00:14:44.843 "data_offset": 0, 00:14:44.843 "data_size": 65536 00:14:44.843 } 00:14:44.843 ] 00:14:44.843 }' 00:14:44.843 
10:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.843 10:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.411 10:43:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:45.411 10:43:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:45.411 10:43:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.411 10:43:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.411 [2024-11-15 10:43:06.304354] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:45.411 10:43:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.411 10:43:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:45.411 10:43:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.411 10:43:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.411 10:43:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.411 10:43:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:45.411 10:43:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.411 10:43:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:45.411 10:43:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:45.411 10:43:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:45.411 10:43:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:45.411 10:43:06 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.411 10:43:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.411 [2024-11-15 10:43:06.407911] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:45.411 10:43:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.412 10:43:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:45.412 10:43:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.412 10:43:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:45.412 10:43:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:45.412 10:43:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:45.412 10:43:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:45.412 10:43:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.412 10:43:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.412 10:43:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.412 10:43:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.412 10:43:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.412 10:43:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.412 10:43:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.412 10:43:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.412 10:43:06 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.412 10:43:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.412 "name": "raid_bdev1", 00:14:45.412 "uuid": "656c3e9a-cc73-45f4-9409-1f20115324d4", 00:14:45.412 "strip_size_kb": 0, 00:14:45.412 "state": "online", 00:14:45.412 "raid_level": "raid1", 00:14:45.412 "superblock": false, 00:14:45.412 "num_base_bdevs": 4, 00:14:45.412 "num_base_bdevs_discovered": 3, 00:14:45.412 "num_base_bdevs_operational": 3, 00:14:45.412 "base_bdevs_list": [ 00:14:45.412 { 00:14:45.412 "name": null, 00:14:45.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.412 "is_configured": false, 00:14:45.412 "data_offset": 0, 00:14:45.412 "data_size": 65536 00:14:45.412 }, 00:14:45.412 { 00:14:45.412 "name": "BaseBdev2", 00:14:45.412 "uuid": "1e35fcdd-8188-570f-81c4-e9d75138ed85", 00:14:45.412 "is_configured": true, 00:14:45.412 "data_offset": 0, 00:14:45.412 "data_size": 65536 00:14:45.412 }, 00:14:45.412 { 00:14:45.412 "name": "BaseBdev3", 00:14:45.412 "uuid": "2c08bc1a-acad-5075-96a1-421c7e536a00", 00:14:45.412 "is_configured": true, 00:14:45.412 "data_offset": 0, 00:14:45.412 "data_size": 65536 00:14:45.412 }, 00:14:45.412 { 00:14:45.412 "name": "BaseBdev4", 00:14:45.412 "uuid": "dc749a7c-0e6e-574a-bf77-a6b3414e04fa", 00:14:45.412 "is_configured": true, 00:14:45.412 "data_offset": 0, 00:14:45.412 "data_size": 65536 00:14:45.412 } 00:14:45.412 ] 00:14:45.412 }' 00:14:45.412 10:43:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.412 10:43:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.412 [2024-11-15 10:43:06.543955] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:45.412 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:45.412 Zero copy mechanism will not be used. 00:14:45.412 Running I/O for 60 seconds... 
00:14:45.978 10:43:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:45.978 10:43:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.978 10:43:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.978 [2024-11-15 10:43:06.947802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:45.978 10:43:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.978 10:43:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:45.978 [2024-11-15 10:43:07.023585] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:45.978 [2024-11-15 10:43:07.026161] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:45.978 [2024-11-15 10:43:07.135706] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:46.236 [2024-11-15 10:43:07.137313] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:46.236 [2024-11-15 10:43:07.379789] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:46.236 [2024-11-15 10:43:07.380170] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:46.754 133.00 IOPS, 399.00 MiB/s [2024-11-15T10:43:07.916Z] [2024-11-15 10:43:07.726831] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:47.012 [2024-11-15 10:43:07.975051] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:47.012 10:43:07 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:47.012 10:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.012 10:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:47.012 10:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:47.012 10:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.012 10:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.012 10:43:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.012 10:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.012 10:43:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.012 10:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.012 10:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.012 "name": "raid_bdev1", 00:14:47.012 "uuid": "656c3e9a-cc73-45f4-9409-1f20115324d4", 00:14:47.012 "strip_size_kb": 0, 00:14:47.012 "state": "online", 00:14:47.012 "raid_level": "raid1", 00:14:47.012 "superblock": false, 00:14:47.012 "num_base_bdevs": 4, 00:14:47.012 "num_base_bdevs_discovered": 4, 00:14:47.012 "num_base_bdevs_operational": 4, 00:14:47.012 "process": { 00:14:47.012 "type": "rebuild", 00:14:47.012 "target": "spare", 00:14:47.012 "progress": { 00:14:47.012 "blocks": 10240, 00:14:47.012 "percent": 15 00:14:47.013 } 00:14:47.013 }, 00:14:47.013 "base_bdevs_list": [ 00:14:47.013 { 00:14:47.013 "name": "spare", 00:14:47.013 "uuid": "fe398332-fb85-5feb-972b-79ad68583daa", 00:14:47.013 "is_configured": true, 00:14:47.013 "data_offset": 0, 00:14:47.013 "data_size": 65536 00:14:47.013 }, 00:14:47.013 { 
00:14:47.013 "name": "BaseBdev2", 00:14:47.013 "uuid": "1e35fcdd-8188-570f-81c4-e9d75138ed85", 00:14:47.013 "is_configured": true, 00:14:47.013 "data_offset": 0, 00:14:47.013 "data_size": 65536 00:14:47.013 }, 00:14:47.013 { 00:14:47.013 "name": "BaseBdev3", 00:14:47.013 "uuid": "2c08bc1a-acad-5075-96a1-421c7e536a00", 00:14:47.013 "is_configured": true, 00:14:47.013 "data_offset": 0, 00:14:47.013 "data_size": 65536 00:14:47.013 }, 00:14:47.013 { 00:14:47.013 "name": "BaseBdev4", 00:14:47.013 "uuid": "dc749a7c-0e6e-574a-bf77-a6b3414e04fa", 00:14:47.013 "is_configured": true, 00:14:47.013 "data_offset": 0, 00:14:47.013 "data_size": 65536 00:14:47.013 } 00:14:47.013 ] 00:14:47.013 }' 00:14:47.013 10:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.013 10:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:47.013 10:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:47.013 10:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:47.013 10:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:47.013 10:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.013 10:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.013 [2024-11-15 10:43:08.157025] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:47.271 [2024-11-15 10:43:08.322301] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:47.271 [2024-11-15 10:43:08.344475] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:47.271 [2024-11-15 10:43:08.344564] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:47.271 [2024-11-15 10:43:08.344595] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:47.271 [2024-11-15 10:43:08.375419] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:47.271 10:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.271 10:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:47.271 10:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:47.271 10:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:47.271 10:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:47.271 10:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:47.271 10:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:47.271 10:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.271 10:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.271 10:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.271 10:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.271 10:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.271 10:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.271 10:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.271 10:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.530 10:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:14:47.530 10:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.530 "name": "raid_bdev1", 00:14:47.530 "uuid": "656c3e9a-cc73-45f4-9409-1f20115324d4", 00:14:47.530 "strip_size_kb": 0, 00:14:47.530 "state": "online", 00:14:47.530 "raid_level": "raid1", 00:14:47.530 "superblock": false, 00:14:47.530 "num_base_bdevs": 4, 00:14:47.530 "num_base_bdevs_discovered": 3, 00:14:47.530 "num_base_bdevs_operational": 3, 00:14:47.530 "base_bdevs_list": [ 00:14:47.530 { 00:14:47.530 "name": null, 00:14:47.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.530 "is_configured": false, 00:14:47.530 "data_offset": 0, 00:14:47.530 "data_size": 65536 00:14:47.530 }, 00:14:47.530 { 00:14:47.530 "name": "BaseBdev2", 00:14:47.530 "uuid": "1e35fcdd-8188-570f-81c4-e9d75138ed85", 00:14:47.530 "is_configured": true, 00:14:47.530 "data_offset": 0, 00:14:47.530 "data_size": 65536 00:14:47.530 }, 00:14:47.530 { 00:14:47.530 "name": "BaseBdev3", 00:14:47.530 "uuid": "2c08bc1a-acad-5075-96a1-421c7e536a00", 00:14:47.530 "is_configured": true, 00:14:47.530 "data_offset": 0, 00:14:47.530 "data_size": 65536 00:14:47.530 }, 00:14:47.530 { 00:14:47.530 "name": "BaseBdev4", 00:14:47.530 "uuid": "dc749a7c-0e6e-574a-bf77-a6b3414e04fa", 00:14:47.530 "is_configured": true, 00:14:47.530 "data_offset": 0, 00:14:47.530 "data_size": 65536 00:14:47.530 } 00:14:47.530 ] 00:14:47.530 }' 00:14:47.530 10:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.530 10:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.789 110.00 IOPS, 330.00 MiB/s [2024-11-15T10:43:08.951Z] 10:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:47.789 10:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.789 10:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:14:47.789 10:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:47.789 10:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.789 10:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.789 10:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.789 10:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.789 10:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.047 10:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.047 10:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:48.047 "name": "raid_bdev1", 00:14:48.047 "uuid": "656c3e9a-cc73-45f4-9409-1f20115324d4", 00:14:48.047 "strip_size_kb": 0, 00:14:48.047 "state": "online", 00:14:48.047 "raid_level": "raid1", 00:14:48.047 "superblock": false, 00:14:48.047 "num_base_bdevs": 4, 00:14:48.047 "num_base_bdevs_discovered": 3, 00:14:48.047 "num_base_bdevs_operational": 3, 00:14:48.047 "base_bdevs_list": [ 00:14:48.047 { 00:14:48.047 "name": null, 00:14:48.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.047 "is_configured": false, 00:14:48.047 "data_offset": 0, 00:14:48.047 "data_size": 65536 00:14:48.047 }, 00:14:48.047 { 00:14:48.047 "name": "BaseBdev2", 00:14:48.047 "uuid": "1e35fcdd-8188-570f-81c4-e9d75138ed85", 00:14:48.047 "is_configured": true, 00:14:48.047 "data_offset": 0, 00:14:48.047 "data_size": 65536 00:14:48.047 }, 00:14:48.047 { 00:14:48.047 "name": "BaseBdev3", 00:14:48.047 "uuid": "2c08bc1a-acad-5075-96a1-421c7e536a00", 00:14:48.047 "is_configured": true, 00:14:48.047 "data_offset": 0, 00:14:48.047 "data_size": 65536 00:14:48.047 }, 00:14:48.047 { 00:14:48.047 "name": "BaseBdev4", 00:14:48.047 
"uuid": "dc749a7c-0e6e-574a-bf77-a6b3414e04fa", 00:14:48.047 "is_configured": true, 00:14:48.047 "data_offset": 0, 00:14:48.047 "data_size": 65536 00:14:48.047 } 00:14:48.047 ] 00:14:48.047 }' 00:14:48.047 10:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.047 10:43:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:48.047 10:43:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.047 10:43:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:48.047 10:43:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:48.047 10:43:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.047 10:43:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.047 [2024-11-15 10:43:09.079885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:48.047 10:43:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.047 10:43:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:48.047 [2024-11-15 10:43:09.153102] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:48.047 [2024-11-15 10:43:09.155682] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:48.305 [2024-11-15 10:43:09.295519] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:48.305 [2024-11-15 10:43:09.426999] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:48.305 [2024-11-15 10:43:09.427282] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 
offset_begin: 0 offset_end: 6144 00:14:48.563 137.67 IOPS, 413.00 MiB/s [2024-11-15T10:43:09.725Z] [2024-11-15 10:43:09.683388] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:48.822 [2024-11-15 10:43:09.905160] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:48.822 [2024-11-15 10:43:09.906056] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:49.080 10:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:49.080 10:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.080 10:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:49.080 10:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:49.080 10:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.080 10:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.080 10:43:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.080 10:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.080 10:43:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.080 10:43:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.080 10:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.080 "name": "raid_bdev1", 00:14:49.080 "uuid": "656c3e9a-cc73-45f4-9409-1f20115324d4", 00:14:49.080 "strip_size_kb": 0, 00:14:49.080 "state": "online", 00:14:49.080 "raid_level": "raid1", 00:14:49.080 
"superblock": false, 00:14:49.080 "num_base_bdevs": 4, 00:14:49.080 "num_base_bdevs_discovered": 4, 00:14:49.080 "num_base_bdevs_operational": 4, 00:14:49.080 "process": { 00:14:49.080 "type": "rebuild", 00:14:49.080 "target": "spare", 00:14:49.080 "progress": { 00:14:49.080 "blocks": 10240, 00:14:49.080 "percent": 15 00:14:49.080 } 00:14:49.080 }, 00:14:49.080 "base_bdevs_list": [ 00:14:49.080 { 00:14:49.080 "name": "spare", 00:14:49.080 "uuid": "fe398332-fb85-5feb-972b-79ad68583daa", 00:14:49.080 "is_configured": true, 00:14:49.080 "data_offset": 0, 00:14:49.080 "data_size": 65536 00:14:49.080 }, 00:14:49.080 { 00:14:49.081 "name": "BaseBdev2", 00:14:49.081 "uuid": "1e35fcdd-8188-570f-81c4-e9d75138ed85", 00:14:49.081 "is_configured": true, 00:14:49.081 "data_offset": 0, 00:14:49.081 "data_size": 65536 00:14:49.081 }, 00:14:49.081 { 00:14:49.081 "name": "BaseBdev3", 00:14:49.081 "uuid": "2c08bc1a-acad-5075-96a1-421c7e536a00", 00:14:49.081 "is_configured": true, 00:14:49.081 "data_offset": 0, 00:14:49.081 "data_size": 65536 00:14:49.081 }, 00:14:49.081 { 00:14:49.081 "name": "BaseBdev4", 00:14:49.081 "uuid": "dc749a7c-0e6e-574a-bf77-a6b3414e04fa", 00:14:49.081 "is_configured": true, 00:14:49.081 "data_offset": 0, 00:14:49.081 "data_size": 65536 00:14:49.081 } 00:14:49.081 ] 00:14:49.081 }' 00:14:49.081 10:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.081 10:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:49.081 10:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.343 [2024-11-15 10:43:10.268103] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:49.343 [2024-11-15 10:43:10.268785] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:49.343 
10:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:49.344 10:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:49.344 10:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:49.344 10:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:49.344 10:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:49.344 10:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:49.344 10:43:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.344 10:43:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.344 [2024-11-15 10:43:10.291543] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:49.344 [2024-11-15 10:43:10.483596] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:49.344 [2024-11-15 10:43:10.483960] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:49.610 [2024-11-15 10:43:10.500468] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:49.610 [2024-11-15 10:43:10.500521] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:49.610 10:43:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.610 10:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:49.610 10:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:49.610 10:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 
rebuild spare 00:14:49.610 10:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.610 10:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:49.610 10:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:49.610 10:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.610 10:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.610 10:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.610 10:43:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.610 10:43:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.610 [2024-11-15 10:43:10.512349] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:49.610 10:43:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.610 112.75 IOPS, 338.25 MiB/s [2024-11-15T10:43:10.772Z] 10:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.610 "name": "raid_bdev1", 00:14:49.610 "uuid": "656c3e9a-cc73-45f4-9409-1f20115324d4", 00:14:49.610 "strip_size_kb": 0, 00:14:49.610 "state": "online", 00:14:49.610 "raid_level": "raid1", 00:14:49.610 "superblock": false, 00:14:49.610 "num_base_bdevs": 4, 00:14:49.610 "num_base_bdevs_discovered": 3, 00:14:49.610 "num_base_bdevs_operational": 3, 00:14:49.610 "process": { 00:14:49.610 "type": "rebuild", 00:14:49.610 "target": "spare", 00:14:49.610 "progress": { 00:14:49.610 "blocks": 16384, 00:14:49.610 "percent": 25 00:14:49.610 } 00:14:49.610 }, 00:14:49.610 "base_bdevs_list": [ 00:14:49.610 { 00:14:49.610 "name": "spare", 00:14:49.610 "uuid": 
"fe398332-fb85-5feb-972b-79ad68583daa", 00:14:49.610 "is_configured": true, 00:14:49.610 "data_offset": 0, 00:14:49.610 "data_size": 65536 00:14:49.610 }, 00:14:49.610 { 00:14:49.610 "name": null, 00:14:49.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.610 "is_configured": false, 00:14:49.610 "data_offset": 0, 00:14:49.610 "data_size": 65536 00:14:49.611 }, 00:14:49.611 { 00:14:49.611 "name": "BaseBdev3", 00:14:49.611 "uuid": "2c08bc1a-acad-5075-96a1-421c7e536a00", 00:14:49.611 "is_configured": true, 00:14:49.611 "data_offset": 0, 00:14:49.611 "data_size": 65536 00:14:49.611 }, 00:14:49.611 { 00:14:49.611 "name": "BaseBdev4", 00:14:49.611 "uuid": "dc749a7c-0e6e-574a-bf77-a6b3414e04fa", 00:14:49.611 "is_configured": true, 00:14:49.611 "data_offset": 0, 00:14:49.611 "data_size": 65536 00:14:49.611 } 00:14:49.611 ] 00:14:49.611 }' 00:14:49.611 10:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.611 10:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:49.611 10:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.611 10:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:49.611 10:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=519 00:14:49.611 10:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:49.611 10:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:49.611 10:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.611 10:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:49.611 10:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:49.611 10:43:10 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.611 10:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.611 10:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.611 10:43:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.611 10:43:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.611 10:43:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.611 10:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.611 "name": "raid_bdev1", 00:14:49.611 "uuid": "656c3e9a-cc73-45f4-9409-1f20115324d4", 00:14:49.611 "strip_size_kb": 0, 00:14:49.611 "state": "online", 00:14:49.611 "raid_level": "raid1", 00:14:49.611 "superblock": false, 00:14:49.611 "num_base_bdevs": 4, 00:14:49.611 "num_base_bdevs_discovered": 3, 00:14:49.611 "num_base_bdevs_operational": 3, 00:14:49.611 "process": { 00:14:49.611 "type": "rebuild", 00:14:49.611 "target": "spare", 00:14:49.611 "progress": { 00:14:49.611 "blocks": 16384, 00:14:49.611 "percent": 25 00:14:49.611 } 00:14:49.611 }, 00:14:49.611 "base_bdevs_list": [ 00:14:49.611 { 00:14:49.611 "name": "spare", 00:14:49.611 "uuid": "fe398332-fb85-5feb-972b-79ad68583daa", 00:14:49.611 "is_configured": true, 00:14:49.611 "data_offset": 0, 00:14:49.611 "data_size": 65536 00:14:49.611 }, 00:14:49.611 { 00:14:49.611 "name": null, 00:14:49.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.611 "is_configured": false, 00:14:49.611 "data_offset": 0, 00:14:49.611 "data_size": 65536 00:14:49.611 }, 00:14:49.611 { 00:14:49.611 "name": "BaseBdev3", 00:14:49.611 "uuid": "2c08bc1a-acad-5075-96a1-421c7e536a00", 00:14:49.611 "is_configured": true, 00:14:49.611 "data_offset": 0, 00:14:49.611 "data_size": 65536 00:14:49.611 }, 
00:14:49.611 { 00:14:49.611 "name": "BaseBdev4", 00:14:49.611 "uuid": "dc749a7c-0e6e-574a-bf77-a6b3414e04fa", 00:14:49.611 "is_configured": true, 00:14:49.611 "data_offset": 0, 00:14:49.611 "data_size": 65536 00:14:49.611 } 00:14:49.611 ] 00:14:49.611 }' 00:14:49.611 10:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.869 10:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:49.869 10:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.869 10:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:49.869 10:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:49.869 [2024-11-15 10:43:10.911592] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:49.869 [2024-11-15 10:43:11.024134] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:50.693 100.40 IOPS, 301.20 MiB/s [2024-11-15T10:43:11.855Z] 10:43:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:50.693 10:43:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:50.693 10:43:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:50.693 10:43:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:50.693 10:43:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:50.693 10:43:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:50.693 10:43:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.693 10:43:11 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.693 10:43:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.693 10:43:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.951 10:43:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.951 10:43:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:50.951 "name": "raid_bdev1", 00:14:50.951 "uuid": "656c3e9a-cc73-45f4-9409-1f20115324d4", 00:14:50.951 "strip_size_kb": 0, 00:14:50.951 "state": "online", 00:14:50.951 "raid_level": "raid1", 00:14:50.951 "superblock": false, 00:14:50.951 "num_base_bdevs": 4, 00:14:50.951 "num_base_bdevs_discovered": 3, 00:14:50.951 "num_base_bdevs_operational": 3, 00:14:50.951 "process": { 00:14:50.951 "type": "rebuild", 00:14:50.951 "target": "spare", 00:14:50.951 "progress": { 00:14:50.951 "blocks": 34816, 00:14:50.951 "percent": 53 00:14:50.951 } 00:14:50.951 }, 00:14:50.951 "base_bdevs_list": [ 00:14:50.951 { 00:14:50.951 "name": "spare", 00:14:50.951 "uuid": "fe398332-fb85-5feb-972b-79ad68583daa", 00:14:50.951 "is_configured": true, 00:14:50.951 "data_offset": 0, 00:14:50.951 "data_size": 65536 00:14:50.951 }, 00:14:50.951 { 00:14:50.951 "name": null, 00:14:50.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.951 "is_configured": false, 00:14:50.951 "data_offset": 0, 00:14:50.951 "data_size": 65536 00:14:50.951 }, 00:14:50.951 { 00:14:50.951 "name": "BaseBdev3", 00:14:50.951 "uuid": "2c08bc1a-acad-5075-96a1-421c7e536a00", 00:14:50.951 "is_configured": true, 00:14:50.951 "data_offset": 0, 00:14:50.951 "data_size": 65536 00:14:50.951 }, 00:14:50.951 { 00:14:50.951 "name": "BaseBdev4", 00:14:50.951 "uuid": "dc749a7c-0e6e-574a-bf77-a6b3414e04fa", 00:14:50.951 "is_configured": true, 00:14:50.951 "data_offset": 0, 00:14:50.951 "data_size": 65536 00:14:50.951 } 
00:14:50.951 ] 00:14:50.951 }' 00:14:50.951 10:43:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:50.951 10:43:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:50.951 10:43:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:50.951 10:43:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:50.951 10:43:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:51.209 [2024-11-15 10:43:12.136885] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:51.468 [2024-11-15 10:43:12.510587] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:51.468 [2024-11-15 10:43:12.511609] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:52.034 91.00 IOPS, 273.00 MiB/s [2024-11-15T10:43:13.197Z] 10:43:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:52.035 10:43:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:52.035 10:43:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.035 10:43:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:52.035 10:43:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:52.035 10:43:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.035 10:43:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.035 10:43:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:52.035 10:43:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.035 10:43:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.035 10:43:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.035 10:43:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.035 "name": "raid_bdev1", 00:14:52.035 "uuid": "656c3e9a-cc73-45f4-9409-1f20115324d4", 00:14:52.035 "strip_size_kb": 0, 00:14:52.035 "state": "online", 00:14:52.035 "raid_level": "raid1", 00:14:52.035 "superblock": false, 00:14:52.035 "num_base_bdevs": 4, 00:14:52.035 "num_base_bdevs_discovered": 3, 00:14:52.035 "num_base_bdevs_operational": 3, 00:14:52.035 "process": { 00:14:52.035 "type": "rebuild", 00:14:52.035 "target": "spare", 00:14:52.035 "progress": { 00:14:52.035 "blocks": 51200, 00:14:52.035 "percent": 78 00:14:52.035 } 00:14:52.035 }, 00:14:52.035 "base_bdevs_list": [ 00:14:52.035 { 00:14:52.035 "name": "spare", 00:14:52.035 "uuid": "fe398332-fb85-5feb-972b-79ad68583daa", 00:14:52.035 "is_configured": true, 00:14:52.035 "data_offset": 0, 00:14:52.035 "data_size": 65536 00:14:52.035 }, 00:14:52.035 { 00:14:52.035 "name": null, 00:14:52.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.035 "is_configured": false, 00:14:52.035 "data_offset": 0, 00:14:52.035 "data_size": 65536 00:14:52.035 }, 00:14:52.035 { 00:14:52.035 "name": "BaseBdev3", 00:14:52.035 "uuid": "2c08bc1a-acad-5075-96a1-421c7e536a00", 00:14:52.035 "is_configured": true, 00:14:52.035 "data_offset": 0, 00:14:52.035 "data_size": 65536 00:14:52.035 }, 00:14:52.035 { 00:14:52.035 "name": "BaseBdev4", 00:14:52.035 "uuid": "dc749a7c-0e6e-574a-bf77-a6b3414e04fa", 00:14:52.035 "is_configured": true, 00:14:52.035 "data_offset": 0, 00:14:52.035 "data_size": 65536 00:14:52.035 } 00:14:52.035 ] 00:14:52.035 }' 00:14:52.035 10:43:13 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.035 10:43:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:52.035 10:43:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.035 10:43:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:52.035 10:43:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:52.601 82.86 IOPS, 248.57 MiB/s [2024-11-15T10:43:13.763Z] [2024-11-15 10:43:13.741018] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:52.859 [2024-11-15 10:43:13.848783] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:52.859 [2024-11-15 10:43:13.851599] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:53.117 10:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:53.117 10:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:53.117 10:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.117 10:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:53.117 10:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:53.117 10:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.117 10:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.117 10:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.117 10:43:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.117 10:43:14 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.117 10:43:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.117 10:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.117 "name": "raid_bdev1", 00:14:53.117 "uuid": "656c3e9a-cc73-45f4-9409-1f20115324d4", 00:14:53.117 "strip_size_kb": 0, 00:14:53.117 "state": "online", 00:14:53.117 "raid_level": "raid1", 00:14:53.117 "superblock": false, 00:14:53.117 "num_base_bdevs": 4, 00:14:53.117 "num_base_bdevs_discovered": 3, 00:14:53.117 "num_base_bdevs_operational": 3, 00:14:53.117 "base_bdevs_list": [ 00:14:53.117 { 00:14:53.117 "name": "spare", 00:14:53.117 "uuid": "fe398332-fb85-5feb-972b-79ad68583daa", 00:14:53.117 "is_configured": true, 00:14:53.117 "data_offset": 0, 00:14:53.118 "data_size": 65536 00:14:53.118 }, 00:14:53.118 { 00:14:53.118 "name": null, 00:14:53.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.118 "is_configured": false, 00:14:53.118 "data_offset": 0, 00:14:53.118 "data_size": 65536 00:14:53.118 }, 00:14:53.118 { 00:14:53.118 "name": "BaseBdev3", 00:14:53.118 "uuid": "2c08bc1a-acad-5075-96a1-421c7e536a00", 00:14:53.118 "is_configured": true, 00:14:53.118 "data_offset": 0, 00:14:53.118 "data_size": 65536 00:14:53.118 }, 00:14:53.118 { 00:14:53.118 "name": "BaseBdev4", 00:14:53.118 "uuid": "dc749a7c-0e6e-574a-bf77-a6b3414e04fa", 00:14:53.118 "is_configured": true, 00:14:53.118 "data_offset": 0, 00:14:53.118 "data_size": 65536 00:14:53.118 } 00:14:53.118 ] 00:14:53.118 }' 00:14:53.118 10:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.118 10:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:53.118 10:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.376 10:43:14 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:53.376 10:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:53.376 10:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:53.376 10:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.376 10:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:53.376 10:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:53.376 10:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.376 10:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.376 10:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.376 10:43:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.376 10:43:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.376 10:43:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.376 10:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.376 "name": "raid_bdev1", 00:14:53.376 "uuid": "656c3e9a-cc73-45f4-9409-1f20115324d4", 00:14:53.376 "strip_size_kb": 0, 00:14:53.376 "state": "online", 00:14:53.376 "raid_level": "raid1", 00:14:53.376 "superblock": false, 00:14:53.376 "num_base_bdevs": 4, 00:14:53.376 "num_base_bdevs_discovered": 3, 00:14:53.376 "num_base_bdevs_operational": 3, 00:14:53.376 "base_bdevs_list": [ 00:14:53.376 { 00:14:53.376 "name": "spare", 00:14:53.376 "uuid": "fe398332-fb85-5feb-972b-79ad68583daa", 00:14:53.376 "is_configured": true, 00:14:53.376 "data_offset": 0, 00:14:53.376 "data_size": 65536 00:14:53.376 }, 00:14:53.376 { 00:14:53.376 "name": null, 
00:14:53.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.376 "is_configured": false, 00:14:53.376 "data_offset": 0, 00:14:53.376 "data_size": 65536 00:14:53.376 }, 00:14:53.376 { 00:14:53.376 "name": "BaseBdev3", 00:14:53.376 "uuid": "2c08bc1a-acad-5075-96a1-421c7e536a00", 00:14:53.376 "is_configured": true, 00:14:53.376 "data_offset": 0, 00:14:53.376 "data_size": 65536 00:14:53.376 }, 00:14:53.376 { 00:14:53.376 "name": "BaseBdev4", 00:14:53.377 "uuid": "dc749a7c-0e6e-574a-bf77-a6b3414e04fa", 00:14:53.377 "is_configured": true, 00:14:53.377 "data_offset": 0, 00:14:53.377 "data_size": 65536 00:14:53.377 } 00:14:53.377 ] 00:14:53.377 }' 00:14:53.377 10:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.377 10:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:53.377 10:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.377 10:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:53.377 10:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:53.377 10:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:53.377 10:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:53.377 10:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:53.377 10:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:53.377 10:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:53.377 10:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.377 10:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:14:53.377 10:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.377 10:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.377 10:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.377 10:43:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.377 10:43:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.377 10:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.377 10:43:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.377 10:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.377 "name": "raid_bdev1", 00:14:53.377 "uuid": "656c3e9a-cc73-45f4-9409-1f20115324d4", 00:14:53.377 "strip_size_kb": 0, 00:14:53.377 "state": "online", 00:14:53.377 "raid_level": "raid1", 00:14:53.377 "superblock": false, 00:14:53.377 "num_base_bdevs": 4, 00:14:53.377 "num_base_bdevs_discovered": 3, 00:14:53.377 "num_base_bdevs_operational": 3, 00:14:53.377 "base_bdevs_list": [ 00:14:53.377 { 00:14:53.377 "name": "spare", 00:14:53.377 "uuid": "fe398332-fb85-5feb-972b-79ad68583daa", 00:14:53.377 "is_configured": true, 00:14:53.377 "data_offset": 0, 00:14:53.377 "data_size": 65536 00:14:53.377 }, 00:14:53.377 { 00:14:53.377 "name": null, 00:14:53.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.377 "is_configured": false, 00:14:53.377 "data_offset": 0, 00:14:53.377 "data_size": 65536 00:14:53.377 }, 00:14:53.377 { 00:14:53.377 "name": "BaseBdev3", 00:14:53.377 "uuid": "2c08bc1a-acad-5075-96a1-421c7e536a00", 00:14:53.377 "is_configured": true, 00:14:53.377 "data_offset": 0, 00:14:53.377 "data_size": 65536 00:14:53.377 }, 00:14:53.377 { 00:14:53.377 "name": "BaseBdev4", 00:14:53.377 "uuid": 
"dc749a7c-0e6e-574a-bf77-a6b3414e04fa", 00:14:53.377 "is_configured": true, 00:14:53.377 "data_offset": 0, 00:14:53.377 "data_size": 65536 00:14:53.377 } 00:14:53.377 ] 00:14:53.377 }' 00:14:53.377 10:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.377 10:43:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.894 77.88 IOPS, 233.62 MiB/s [2024-11-15T10:43:15.056Z] 10:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:53.894 10:43:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.894 10:43:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.894 [2024-11-15 10:43:14.939019] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:53.894 [2024-11-15 10:43:14.939061] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:53.894 00:14:53.894 Latency(us) 00:14:53.894 [2024-11-15T10:43:15.056Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:53.894 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:53.894 raid_bdev1 : 8.41 75.25 225.74 0.00 0.00 17691.96 279.27 112960.23 00:14:53.894 [2024-11-15T10:43:15.056Z] =================================================================================================================== 00:14:53.894 [2024-11-15T10:43:15.056Z] Total : 75.25 225.74 0.00 0.00 17691.96 279.27 112960.23 00:14:53.894 [2024-11-15 10:43:14.978793] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:53.894 [2024-11-15 10:43:14.978866] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:53.894 [2024-11-15 10:43:14.979002] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:53.894 
[2024-11-15 10:43:14.979036] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:53.894 { 00:14:53.894 "results": [ 00:14:53.894 { 00:14:53.894 "job": "raid_bdev1", 00:14:53.894 "core_mask": "0x1", 00:14:53.894 "workload": "randrw", 00:14:53.894 "percentage": 50, 00:14:53.894 "status": "finished", 00:14:53.894 "queue_depth": 2, 00:14:53.894 "io_size": 3145728, 00:14:53.894 "runtime": 8.412387, 00:14:53.894 "iops": 75.24618161290012, 00:14:53.894 "mibps": 225.73854483870036, 00:14:53.894 "io_failed": 0, 00:14:53.894 "io_timeout": 0, 00:14:53.894 "avg_latency_us": 17691.95668533678, 00:14:53.894 "min_latency_us": 279.27272727272725, 00:14:53.894 "max_latency_us": 112960.23272727273 00:14:53.894 } 00:14:53.894 ], 00:14:53.894 "core_count": 1 00:14:53.894 } 00:14:53.894 10:43:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.894 10:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.894 10:43:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.894 10:43:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.894 10:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:53.894 10:43:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.894 10:43:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:53.894 10:43:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:53.894 10:43:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:53.894 10:43:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:53.894 10:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:14:53.894 10:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:53.894 10:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:53.894 10:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:53.894 10:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:53.894 10:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:53.894 10:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:53.894 10:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:53.895 10:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:54.473 /dev/nbd0 00:14:54.473 10:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:54.473 10:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:54.473 10:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:54.473 10:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:54.473 10:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:54.473 10:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:54.473 10:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:54.473 10:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:54.473 10:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:54.473 10:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:54.473 10:43:15 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:54.473 1+0 records in 00:14:54.473 1+0 records out 00:14:54.473 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363795 s, 11.3 MB/s 00:14:54.473 10:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:54.473 10:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:54.473 10:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:54.473 10:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:54.473 10:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:54.473 10:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:54.473 10:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:54.473 10:43:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:54.473 10:43:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:54.473 10:43:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:54.473 10:43:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:54.473 10:43:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:54.473 10:43:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:54.473 10:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:54.473 10:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:54.473 10:43:15 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:54.473 10:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:54.473 10:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:54.473 10:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:54.473 10:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:54.473 10:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:54.473 10:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:54.787 /dev/nbd1 00:14:54.787 10:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:54.787 10:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:54.787 10:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:54.787 10:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:54.787 10:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:54.787 10:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:54.788 10:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:54.788 10:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:54.788 10:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:54.788 10:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:54.788 10:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:14:54.788 1+0 records in 00:14:54.788 1+0 records out 00:14:54.788 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354387 s, 11.6 MB/s 00:14:54.788 10:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:54.788 10:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:54.788 10:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:54.788 10:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:54.788 10:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:54.788 10:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:54.788 10:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:54.788 10:43:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:54.788 10:43:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:54.788 10:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:54.788 10:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:54.788 10:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:54.788 10:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:54.788 10:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:54.788 10:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:55.355 10:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:55.355 10:43:16 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:55.355 10:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:55.355 10:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:55.355 10:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:55.355 10:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:55.355 10:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:55.355 10:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:55.355 10:43:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:55.355 10:43:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:55.355 10:43:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:55.356 10:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:55.356 10:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:55.356 10:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:55.356 10:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:55.356 10:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:55.356 10:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:55.356 10:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:55.356 10:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:55.356 10:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_start_disk BaseBdev4 /dev/nbd1 00:14:55.614 /dev/nbd1 00:14:55.615 10:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:55.615 10:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:55.615 10:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:55.615 10:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:55.615 10:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:55.615 10:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:55.615 10:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:55.615 10:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:55.615 10:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:55.615 10:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:55.615 10:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:55.615 1+0 records in 00:14:55.615 1+0 records out 00:14:55.615 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286245 s, 14.3 MB/s 00:14:55.615 10:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:55.615 10:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:55.615 10:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:55.615 10:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:55.615 10:43:16 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@893 -- # return 0 00:14:55.615 10:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:55.615 10:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:55.615 10:43:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:55.615 10:43:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:55.615 10:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:55.615 10:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:55.615 10:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:55.615 10:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:55.615 10:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:55.615 10:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:55.873 10:43:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:55.873 10:43:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:55.873 10:43:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:55.873 10:43:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:56.131 10:43:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:56.131 10:43:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:56.131 10:43:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:56.131 10:43:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:56.131 10:43:17 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:56.131 10:43:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:56.131 10:43:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:56.131 10:43:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:56.131 10:43:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:56.131 10:43:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:56.131 10:43:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:56.390 10:43:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:56.390 10:43:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:56.390 10:43:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:56.390 10:43:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:56.390 10:43:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:56.390 10:43:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:56.390 10:43:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:56.390 10:43:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:56.390 10:43:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:56.390 10:43:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 79016 00:14:56.390 10:43:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 79016 ']' 00:14:56.390 10:43:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 
-- # kill -0 79016 00:14:56.390 10:43:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:14:56.390 10:43:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:56.390 10:43:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79016 00:14:56.390 10:43:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:56.390 10:43:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:56.390 killing process with pid 79016 00:14:56.390 10:43:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79016' 00:14:56.390 10:43:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 79016 00:14:56.390 Received shutdown signal, test time was about 10.832779 seconds 00:14:56.390 00:14:56.390 Latency(us) 00:14:56.390 [2024-11-15T10:43:17.552Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.390 [2024-11-15T10:43:17.552Z] =================================================================================================================== 00:14:56.390 [2024-11-15T10:43:17.552Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:56.390 [2024-11-15 10:43:17.379297] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:56.390 10:43:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 79016 00:14:56.649 [2024-11-15 10:43:17.752522] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:58.023 10:43:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:58.023 00:14:58.023 real 0m14.441s 00:14:58.023 user 0m19.029s 00:14:58.023 sys 0m1.836s 00:14:58.023 10:43:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:58.023 10:43:18 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:58.023 ************************************ 00:14:58.023 END TEST raid_rebuild_test_io 00:14:58.023 ************************************ 00:14:58.023 10:43:18 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:14:58.023 10:43:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:58.023 10:43:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:58.023 10:43:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:58.023 ************************************ 00:14:58.023 START TEST raid_rebuild_test_sb_io 00:14:58.023 ************************************ 00:14:58.023 10:43:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:14:58.023 10:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:58.023 10:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:58.023 10:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:58.023 10:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:58.023 10:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:58.023 10:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:58.023 10:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:58.023 10:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:58.023 10:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:58.023 10:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:58.023 10:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev2 00:14:58.023 10:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:58.023 10:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:58.023 10:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:58.023 10:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:58.023 10:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:58.023 10:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:58.023 10:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:58.023 10:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:58.023 10:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:58.023 10:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:58.023 10:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:58.023 10:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:58.023 10:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:58.023 10:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:58.023 10:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:58.023 10:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:58.023 10:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:58.023 10:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:58.023 10:43:18 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:58.023 10:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79437 00:14:58.023 10:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79437 00:14:58.023 10:43:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79437 ']' 00:14:58.024 10:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:58.024 10:43:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.024 10:43:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:58.024 10:43:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.024 10:43:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:58.024 10:43:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.024 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:58.024 Zero copy mechanism will not be used. 00:14:58.024 [2024-11-15 10:43:19.028855] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:14:58.024 [2024-11-15 10:43:19.029033] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79437 ] 00:14:58.281 [2024-11-15 10:43:19.217384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.281 [2024-11-15 10:43:19.374666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.539 [2024-11-15 10:43:19.599611] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:58.539 [2024-11-15 10:43:19.599681] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:59.106 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:59.106 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:14:59.106 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:59.106 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:59.106 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.106 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.106 BaseBdev1_malloc 00:14:59.106 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.106 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:59.106 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.106 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.106 [2024-11-15 10:43:20.092936] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:59.106 [2024-11-15 10:43:20.093011] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.106 [2024-11-15 10:43:20.093045] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:59.106 [2024-11-15 10:43:20.093065] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.106 [2024-11-15 10:43:20.096249] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.106 [2024-11-15 10:43:20.096309] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:59.106 BaseBdev1 00:14:59.106 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.106 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:59.106 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:59.106 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.106 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.106 BaseBdev2_malloc 00:14:59.106 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.106 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:59.107 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.107 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.107 [2024-11-15 10:43:20.144928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:59.107 [2024-11-15 10:43:20.145001] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:14:59.107 [2024-11-15 10:43:20.145029] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:59.107 [2024-11-15 10:43:20.145048] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.107 [2024-11-15 10:43:20.147767] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.107 [2024-11-15 10:43:20.147815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:59.107 BaseBdev2 00:14:59.107 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.107 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:59.107 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:59.107 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.107 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.107 BaseBdev3_malloc 00:14:59.107 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.107 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:59.107 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.107 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.107 [2024-11-15 10:43:20.198881] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:59.107 [2024-11-15 10:43:20.198948] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.107 [2024-11-15 10:43:20.198979] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:59.107 
[2024-11-15 10:43:20.198998] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.107 [2024-11-15 10:43:20.201766] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.107 [2024-11-15 10:43:20.201812] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:59.107 BaseBdev3 00:14:59.107 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.107 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:59.107 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:59.107 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.107 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.107 BaseBdev4_malloc 00:14:59.107 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.107 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:59.107 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.107 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.107 [2024-11-15 10:43:20.250604] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:59.107 [2024-11-15 10:43:20.250670] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.107 [2024-11-15 10:43:20.250698] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:59.107 [2024-11-15 10:43:20.250716] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.107 [2024-11-15 10:43:20.253421] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.107 [2024-11-15 10:43:20.253469] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:59.107 BaseBdev4 00:14:59.107 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.107 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:59.107 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.107 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.366 spare_malloc 00:14:59.366 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.366 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:59.366 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.366 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.366 spare_delay 00:14:59.366 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.366 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:59.366 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.366 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.366 [2024-11-15 10:43:20.310303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:59.366 [2024-11-15 10:43:20.310369] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.366 [2024-11-15 10:43:20.310401] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:14:59.366 [2024-11-15 10:43:20.310419] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.366 [2024-11-15 10:43:20.313166] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.366 [2024-11-15 10:43:20.313211] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:59.366 spare 00:14:59.366 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.366 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:59.366 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.366 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.366 [2024-11-15 10:43:20.318360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:59.366 [2024-11-15 10:43:20.320787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:59.366 [2024-11-15 10:43:20.320890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:59.366 [2024-11-15 10:43:20.320972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:59.366 [2024-11-15 10:43:20.321206] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:59.366 [2024-11-15 10:43:20.321246] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:59.366 [2024-11-15 10:43:20.321571] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:59.366 [2024-11-15 10:43:20.321826] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:59.366 [2024-11-15 10:43:20.321852] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:59.366 [2024-11-15 10:43:20.322040] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:59.366 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.366 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:59.366 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.366 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:59.366 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:59.366 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:59.366 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:59.366 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.366 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.366 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.366 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.366 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.366 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.366 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.366 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.366 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.366 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.366 "name": "raid_bdev1", 00:14:59.366 "uuid": "91ad6d5f-1645-40d6-84dc-78478ea68ea6", 00:14:59.366 "strip_size_kb": 0, 00:14:59.366 "state": "online", 00:14:59.366 "raid_level": "raid1", 00:14:59.366 "superblock": true, 00:14:59.366 "num_base_bdevs": 4, 00:14:59.366 "num_base_bdevs_discovered": 4, 00:14:59.366 "num_base_bdevs_operational": 4, 00:14:59.366 "base_bdevs_list": [ 00:14:59.366 { 00:14:59.366 "name": "BaseBdev1", 00:14:59.366 "uuid": "f1bd8f6d-43c9-52e4-bb57-99787b6abaed", 00:14:59.366 "is_configured": true, 00:14:59.366 "data_offset": 2048, 00:14:59.366 "data_size": 63488 00:14:59.366 }, 00:14:59.366 { 00:14:59.366 "name": "BaseBdev2", 00:14:59.366 "uuid": "73eac810-1d2a-51b5-b2a0-d8f5c7dc170d", 00:14:59.366 "is_configured": true, 00:14:59.366 "data_offset": 2048, 00:14:59.366 "data_size": 63488 00:14:59.366 }, 00:14:59.366 { 00:14:59.366 "name": "BaseBdev3", 00:14:59.366 "uuid": "f41d4943-a765-5ec4-bcf8-50df2f80f19e", 00:14:59.366 "is_configured": true, 00:14:59.366 "data_offset": 2048, 00:14:59.366 "data_size": 63488 00:14:59.366 }, 00:14:59.366 { 00:14:59.366 "name": "BaseBdev4", 00:14:59.366 "uuid": "1f14dfbe-f8a9-5693-a91c-e04058c8bf63", 00:14:59.366 "is_configured": true, 00:14:59.366 "data_offset": 2048, 00:14:59.366 "data_size": 63488 00:14:59.366 } 00:14:59.366 ] 00:14:59.366 }' 00:14:59.366 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.366 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.934 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:59.934 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.934 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:59.934 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:59.934 [2024-11-15 10:43:20.822933] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:59.934 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.934 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:59.934 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.934 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.934 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.934 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:59.934 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.934 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:59.934 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:59.934 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:59.934 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.934 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.934 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:59.934 [2024-11-15 10:43:20.930479] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:59.934 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.934 10:43:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:59.934 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.934 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:59.934 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:59.934 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:59.934 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:59.934 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.934 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.934 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.934 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.934 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.934 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.934 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.934 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.934 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.934 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.934 "name": "raid_bdev1", 00:14:59.934 "uuid": "91ad6d5f-1645-40d6-84dc-78478ea68ea6", 00:14:59.934 "strip_size_kb": 0, 00:14:59.934 "state": "online", 00:14:59.934 "raid_level": "raid1", 00:14:59.934 
"superblock": true, 00:14:59.934 "num_base_bdevs": 4, 00:14:59.934 "num_base_bdevs_discovered": 3, 00:14:59.934 "num_base_bdevs_operational": 3, 00:14:59.934 "base_bdevs_list": [ 00:14:59.934 { 00:14:59.934 "name": null, 00:14:59.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.934 "is_configured": false, 00:14:59.934 "data_offset": 0, 00:14:59.934 "data_size": 63488 00:14:59.934 }, 00:14:59.934 { 00:14:59.934 "name": "BaseBdev2", 00:14:59.934 "uuid": "73eac810-1d2a-51b5-b2a0-d8f5c7dc170d", 00:14:59.934 "is_configured": true, 00:14:59.934 "data_offset": 2048, 00:14:59.934 "data_size": 63488 00:14:59.934 }, 00:14:59.934 { 00:14:59.934 "name": "BaseBdev3", 00:14:59.934 "uuid": "f41d4943-a765-5ec4-bcf8-50df2f80f19e", 00:14:59.934 "is_configured": true, 00:14:59.934 "data_offset": 2048, 00:14:59.934 "data_size": 63488 00:14:59.934 }, 00:14:59.934 { 00:14:59.934 "name": "BaseBdev4", 00:14:59.934 "uuid": "1f14dfbe-f8a9-5693-a91c-e04058c8bf63", 00:14:59.934 "is_configured": true, 00:14:59.934 "data_offset": 2048, 00:14:59.934 "data_size": 63488 00:14:59.934 } 00:14:59.934 ] 00:14:59.934 }' 00:14:59.934 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.934 10:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.934 [2024-11-15 10:43:21.062502] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:59.934 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:59.934 Zero copy mechanism will not be used. 00:14:59.934 Running I/O for 60 seconds... 
00:15:00.501 10:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:00.501 10:43:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.501 10:43:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.501 [2024-11-15 10:43:21.425228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:00.501 10:43:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.501 10:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:00.501 [2024-11-15 10:43:21.499096] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:15:00.501 [2024-11-15 10:43:21.501732] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:00.501 [2024-11-15 10:43:21.613137] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:00.501 [2024-11-15 10:43:21.613810] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:00.759 [2024-11-15 10:43:21.733705] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:00.759 [2024-11-15 10:43:21.734595] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:01.017 119.00 IOPS, 357.00 MiB/s [2024-11-15T10:43:22.179Z] [2024-11-15 10:43:22.078594] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:01.276 [2024-11-15 10:43:22.199049] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:01.276 [2024-11-15 10:43:22.206889] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:01.534 10:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:01.534 10:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:01.534 10:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:01.534 10:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:01.534 10:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:01.534 10:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.534 10:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.534 10:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.534 10:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.534 10:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.534 10:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:01.534 "name": "raid_bdev1", 00:15:01.534 "uuid": "91ad6d5f-1645-40d6-84dc-78478ea68ea6", 00:15:01.534 "strip_size_kb": 0, 00:15:01.535 "state": "online", 00:15:01.535 "raid_level": "raid1", 00:15:01.535 "superblock": true, 00:15:01.535 "num_base_bdevs": 4, 00:15:01.535 "num_base_bdevs_discovered": 4, 00:15:01.535 "num_base_bdevs_operational": 4, 00:15:01.535 "process": { 00:15:01.535 "type": "rebuild", 00:15:01.535 "target": "spare", 00:15:01.535 "progress": { 00:15:01.535 "blocks": 12288, 00:15:01.535 "percent": 19 00:15:01.535 } 00:15:01.535 }, 00:15:01.535 "base_bdevs_list": [ 00:15:01.535 { 00:15:01.535 "name": "spare", 
00:15:01.535 "uuid": "cb106fee-f602-558f-b8a7-baf9a5b2f61d", 00:15:01.535 "is_configured": true, 00:15:01.535 "data_offset": 2048, 00:15:01.535 "data_size": 63488 00:15:01.535 }, 00:15:01.535 { 00:15:01.535 "name": "BaseBdev2", 00:15:01.535 "uuid": "73eac810-1d2a-51b5-b2a0-d8f5c7dc170d", 00:15:01.535 "is_configured": true, 00:15:01.535 "data_offset": 2048, 00:15:01.535 "data_size": 63488 00:15:01.535 }, 00:15:01.535 { 00:15:01.535 "name": "BaseBdev3", 00:15:01.535 "uuid": "f41d4943-a765-5ec4-bcf8-50df2f80f19e", 00:15:01.535 "is_configured": true, 00:15:01.535 "data_offset": 2048, 00:15:01.535 "data_size": 63488 00:15:01.535 }, 00:15:01.535 { 00:15:01.535 "name": "BaseBdev4", 00:15:01.535 "uuid": "1f14dfbe-f8a9-5693-a91c-e04058c8bf63", 00:15:01.535 "is_configured": true, 00:15:01.535 "data_offset": 2048, 00:15:01.535 "data_size": 63488 00:15:01.535 } 00:15:01.535 ] 00:15:01.535 }' 00:15:01.535 10:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:01.535 [2024-11-15 10:43:22.563168] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:01.535 10:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:01.535 10:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:01.535 10:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:01.535 10:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:01.535 10:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.535 10:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.535 [2024-11-15 10:43:22.638688] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:01.535 [2024-11-15 
10:43:22.682162] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:01.535 [2024-11-15 10:43:22.682963] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:01.793 [2024-11-15 10:43:22.794300] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:01.793 [2024-11-15 10:43:22.798378] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.793 [2024-11-15 10:43:22.798434] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:01.793 [2024-11-15 10:43:22.798454] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:01.793 [2024-11-15 10:43:22.839561] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:15:01.793 10:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.793 10:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:01.793 10:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:01.793 10:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.793 10:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:01.793 10:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:01.793 10:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:01.793 10:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.793 10:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:15:01.793 10:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.793 10:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.793 10:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.793 10:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.793 10:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.793 10:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.793 10:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.793 10:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.793 "name": "raid_bdev1", 00:15:01.793 "uuid": "91ad6d5f-1645-40d6-84dc-78478ea68ea6", 00:15:01.793 "strip_size_kb": 0, 00:15:01.793 "state": "online", 00:15:01.793 "raid_level": "raid1", 00:15:01.793 "superblock": true, 00:15:01.793 "num_base_bdevs": 4, 00:15:01.793 "num_base_bdevs_discovered": 3, 00:15:01.793 "num_base_bdevs_operational": 3, 00:15:01.793 "base_bdevs_list": [ 00:15:01.793 { 00:15:01.793 "name": null, 00:15:01.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.793 "is_configured": false, 00:15:01.793 "data_offset": 0, 00:15:01.793 "data_size": 63488 00:15:01.793 }, 00:15:01.793 { 00:15:01.793 "name": "BaseBdev2", 00:15:01.793 "uuid": "73eac810-1d2a-51b5-b2a0-d8f5c7dc170d", 00:15:01.793 "is_configured": true, 00:15:01.793 "data_offset": 2048, 00:15:01.793 "data_size": 63488 00:15:01.793 }, 00:15:01.793 { 00:15:01.793 "name": "BaseBdev3", 00:15:01.793 "uuid": "f41d4943-a765-5ec4-bcf8-50df2f80f19e", 00:15:01.794 "is_configured": true, 00:15:01.794 "data_offset": 2048, 00:15:01.794 "data_size": 63488 00:15:01.794 }, 00:15:01.794 { 00:15:01.794 "name": "BaseBdev4", 
00:15:01.794 "uuid": "1f14dfbe-f8a9-5693-a91c-e04058c8bf63", 00:15:01.794 "is_configured": true, 00:15:01.794 "data_offset": 2048, 00:15:01.794 "data_size": 63488 00:15:01.794 } 00:15:01.794 ] 00:15:01.794 }' 00:15:01.794 10:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.794 10:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.308 108.50 IOPS, 325.50 MiB/s [2024-11-15T10:43:23.470Z] 10:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:02.308 10:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.308 10:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:02.308 10:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:02.308 10:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.308 10:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.308 10:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.308 10:43:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.308 10:43:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.308 10:43:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.308 10:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.308 "name": "raid_bdev1", 00:15:02.308 "uuid": "91ad6d5f-1645-40d6-84dc-78478ea68ea6", 00:15:02.308 "strip_size_kb": 0, 00:15:02.308 "state": "online", 00:15:02.308 "raid_level": "raid1", 00:15:02.308 "superblock": true, 00:15:02.308 "num_base_bdevs": 4, 00:15:02.308 
"num_base_bdevs_discovered": 3, 00:15:02.308 "num_base_bdevs_operational": 3, 00:15:02.308 "base_bdevs_list": [ 00:15:02.308 { 00:15:02.308 "name": null, 00:15:02.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.308 "is_configured": false, 00:15:02.308 "data_offset": 0, 00:15:02.308 "data_size": 63488 00:15:02.308 }, 00:15:02.308 { 00:15:02.308 "name": "BaseBdev2", 00:15:02.308 "uuid": "73eac810-1d2a-51b5-b2a0-d8f5c7dc170d", 00:15:02.308 "is_configured": true, 00:15:02.308 "data_offset": 2048, 00:15:02.308 "data_size": 63488 00:15:02.308 }, 00:15:02.308 { 00:15:02.308 "name": "BaseBdev3", 00:15:02.308 "uuid": "f41d4943-a765-5ec4-bcf8-50df2f80f19e", 00:15:02.308 "is_configured": true, 00:15:02.308 "data_offset": 2048, 00:15:02.308 "data_size": 63488 00:15:02.308 }, 00:15:02.308 { 00:15:02.308 "name": "BaseBdev4", 00:15:02.308 "uuid": "1f14dfbe-f8a9-5693-a91c-e04058c8bf63", 00:15:02.308 "is_configured": true, 00:15:02.308 "data_offset": 2048, 00:15:02.308 "data_size": 63488 00:15:02.308 } 00:15:02.308 ] 00:15:02.308 }' 00:15:02.308 10:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.308 10:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:02.309 10:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.565 10:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:02.565 10:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:02.565 10:43:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.565 10:43:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.565 [2024-11-15 10:43:23.515841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:02.565 10:43:23 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.565 10:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:02.566 [2024-11-15 10:43:23.590993] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:02.566 [2024-11-15 10:43:23.593628] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:02.566 [2024-11-15 10:43:23.716318] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:02.823 [2024-11-15 10:43:23.950757] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:02.823 [2024-11-15 10:43:23.951615] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:03.340 120.67 IOPS, 362.00 MiB/s [2024-11-15T10:43:24.502Z] [2024-11-15 10:43:24.327783] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:03.597 [2024-11-15 10:43:24.553147] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:03.597 10:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:03.597 10:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:03.597 10:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:03.597 10:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:03.597 10:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:03.597 10:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:03.597 10:43:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.597 10:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.597 10:43:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.597 10:43:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.597 10:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:03.597 "name": "raid_bdev1", 00:15:03.597 "uuid": "91ad6d5f-1645-40d6-84dc-78478ea68ea6", 00:15:03.597 "strip_size_kb": 0, 00:15:03.597 "state": "online", 00:15:03.597 "raid_level": "raid1", 00:15:03.597 "superblock": true, 00:15:03.597 "num_base_bdevs": 4, 00:15:03.597 "num_base_bdevs_discovered": 4, 00:15:03.597 "num_base_bdevs_operational": 4, 00:15:03.597 "process": { 00:15:03.597 "type": "rebuild", 00:15:03.597 "target": "spare", 00:15:03.597 "progress": { 00:15:03.597 "blocks": 10240, 00:15:03.597 "percent": 16 00:15:03.597 } 00:15:03.597 }, 00:15:03.597 "base_bdevs_list": [ 00:15:03.597 { 00:15:03.597 "name": "spare", 00:15:03.597 "uuid": "cb106fee-f602-558f-b8a7-baf9a5b2f61d", 00:15:03.597 "is_configured": true, 00:15:03.597 "data_offset": 2048, 00:15:03.597 "data_size": 63488 00:15:03.597 }, 00:15:03.597 { 00:15:03.597 "name": "BaseBdev2", 00:15:03.597 "uuid": "73eac810-1d2a-51b5-b2a0-d8f5c7dc170d", 00:15:03.597 "is_configured": true, 00:15:03.597 "data_offset": 2048, 00:15:03.597 "data_size": 63488 00:15:03.597 }, 00:15:03.597 { 00:15:03.597 "name": "BaseBdev3", 00:15:03.597 "uuid": "f41d4943-a765-5ec4-bcf8-50df2f80f19e", 00:15:03.597 "is_configured": true, 00:15:03.597 "data_offset": 2048, 00:15:03.597 "data_size": 63488 00:15:03.597 }, 00:15:03.597 { 00:15:03.597 "name": "BaseBdev4", 00:15:03.597 "uuid": "1f14dfbe-f8a9-5693-a91c-e04058c8bf63", 00:15:03.597 "is_configured": true, 00:15:03.597 "data_offset": 
2048, 00:15:03.597 "data_size": 63488 00:15:03.597 } 00:15:03.597 ] 00:15:03.597 }' 00:15:03.597 10:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.597 10:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:03.597 10:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:03.597 10:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:03.597 10:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:03.597 10:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:03.597 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:03.597 10:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:03.597 10:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:03.597 10:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:03.597 10:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:03.598 10:43:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.598 10:43:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.598 [2024-11-15 10:43:24.724305] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:03.887 [2024-11-15 10:43:24.979512] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:15:03.887 [2024-11-15 10:43:24.979597] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:15:03.887 10:43:24 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.887 10:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:03.887 10:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:03.887 10:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:03.887 10:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:03.887 10:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:03.887 10:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:03.887 10:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:03.887 10:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.887 10:43:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.887 10:43:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.887 10:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.887 10:43:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.164 10:43:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.164 "name": "raid_bdev1", 00:15:04.164 "uuid": "91ad6d5f-1645-40d6-84dc-78478ea68ea6", 00:15:04.164 "strip_size_kb": 0, 00:15:04.164 "state": "online", 00:15:04.164 "raid_level": "raid1", 00:15:04.164 "superblock": true, 00:15:04.164 "num_base_bdevs": 4, 00:15:04.164 "num_base_bdevs_discovered": 3, 00:15:04.164 "num_base_bdevs_operational": 3, 00:15:04.164 "process": { 00:15:04.164 "type": "rebuild", 00:15:04.164 "target": "spare", 00:15:04.164 "progress": { 
00:15:04.164 "blocks": 12288, 00:15:04.164 "percent": 19 00:15:04.164 } 00:15:04.164 }, 00:15:04.164 "base_bdevs_list": [ 00:15:04.164 { 00:15:04.164 "name": "spare", 00:15:04.164 "uuid": "cb106fee-f602-558f-b8a7-baf9a5b2f61d", 00:15:04.164 "is_configured": true, 00:15:04.164 "data_offset": 2048, 00:15:04.164 "data_size": 63488 00:15:04.164 }, 00:15:04.164 { 00:15:04.164 "name": null, 00:15:04.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.164 "is_configured": false, 00:15:04.164 "data_offset": 0, 00:15:04.164 "data_size": 63488 00:15:04.164 }, 00:15:04.164 { 00:15:04.164 "name": "BaseBdev3", 00:15:04.164 "uuid": "f41d4943-a765-5ec4-bcf8-50df2f80f19e", 00:15:04.164 "is_configured": true, 00:15:04.164 "data_offset": 2048, 00:15:04.164 "data_size": 63488 00:15:04.164 }, 00:15:04.164 { 00:15:04.164 "name": "BaseBdev4", 00:15:04.164 "uuid": "1f14dfbe-f8a9-5693-a91c-e04058c8bf63", 00:15:04.164 "is_configured": true, 00:15:04.164 "data_offset": 2048, 00:15:04.164 "data_size": 63488 00:15:04.164 } 00:15:04.164 ] 00:15:04.164 }' 00:15:04.164 10:43:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.165 110.25 IOPS, 330.75 MiB/s [2024-11-15T10:43:25.327Z] 10:43:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:04.165 10:43:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.165 [2024-11-15 10:43:25.119871] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:04.165 10:43:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:04.165 10:43:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=534 00:15:04.165 10:43:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:04.165 10:43:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:04.165 10:43:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.165 10:43:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:04.165 10:43:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:04.165 10:43:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.165 10:43:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.165 10:43:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.165 10:43:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.165 10:43:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.165 10:43:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.165 10:43:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.165 "name": "raid_bdev1", 00:15:04.165 "uuid": "91ad6d5f-1645-40d6-84dc-78478ea68ea6", 00:15:04.165 "strip_size_kb": 0, 00:15:04.165 "state": "online", 00:15:04.165 "raid_level": "raid1", 00:15:04.165 "superblock": true, 00:15:04.165 "num_base_bdevs": 4, 00:15:04.165 "num_base_bdevs_discovered": 3, 00:15:04.165 "num_base_bdevs_operational": 3, 00:15:04.165 "process": { 00:15:04.165 "type": "rebuild", 00:15:04.165 "target": "spare", 00:15:04.165 "progress": { 00:15:04.165 "blocks": 14336, 00:15:04.165 "percent": 22 00:15:04.165 } 00:15:04.165 }, 00:15:04.165 "base_bdevs_list": [ 00:15:04.165 { 00:15:04.165 "name": "spare", 00:15:04.165 "uuid": "cb106fee-f602-558f-b8a7-baf9a5b2f61d", 00:15:04.165 "is_configured": true, 00:15:04.165 "data_offset": 2048, 
00:15:04.165 "data_size": 63488 00:15:04.165 }, 00:15:04.165 { 00:15:04.165 "name": null, 00:15:04.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.165 "is_configured": false, 00:15:04.165 "data_offset": 0, 00:15:04.165 "data_size": 63488 00:15:04.165 }, 00:15:04.165 { 00:15:04.165 "name": "BaseBdev3", 00:15:04.165 "uuid": "f41d4943-a765-5ec4-bcf8-50df2f80f19e", 00:15:04.165 "is_configured": true, 00:15:04.165 "data_offset": 2048, 00:15:04.165 "data_size": 63488 00:15:04.165 }, 00:15:04.165 { 00:15:04.165 "name": "BaseBdev4", 00:15:04.165 "uuid": "1f14dfbe-f8a9-5693-a91c-e04058c8bf63", 00:15:04.165 "is_configured": true, 00:15:04.165 "data_offset": 2048, 00:15:04.165 "data_size": 63488 00:15:04.165 } 00:15:04.165 ] 00:15:04.165 }' 00:15:04.165 10:43:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.165 10:43:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:04.165 10:43:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.165 10:43:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:04.165 10:43:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:04.424 [2024-11-15 10:43:25.368600] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:04.682 [2024-11-15 10:43:25.631304] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:04.682 [2024-11-15 10:43:25.632539] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:04.940 [2024-11-15 10:43:25.862628] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:15:04.940 [2024-11-15 
10:43:25.863240] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:15:05.199 98.20 IOPS, 294.60 MiB/s [2024-11-15T10:43:26.361Z] 10:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:05.199 10:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:05.199 10:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.199 10:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:05.199 10:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:05.199 10:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.199 10:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.199 10:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.199 10:43:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.199 10:43:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.199 10:43:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.199 10:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.199 "name": "raid_bdev1", 00:15:05.199 "uuid": "91ad6d5f-1645-40d6-84dc-78478ea68ea6", 00:15:05.199 "strip_size_kb": 0, 00:15:05.199 "state": "online", 00:15:05.199 "raid_level": "raid1", 00:15:05.199 "superblock": true, 00:15:05.199 "num_base_bdevs": 4, 00:15:05.199 "num_base_bdevs_discovered": 3, 00:15:05.199 "num_base_bdevs_operational": 3, 00:15:05.199 "process": { 00:15:05.199 "type": "rebuild", 00:15:05.199 "target": "spare", 
00:15:05.199 "progress": { 00:15:05.199 "blocks": 26624, 00:15:05.199 "percent": 41 00:15:05.199 } 00:15:05.199 }, 00:15:05.199 "base_bdevs_list": [ 00:15:05.199 { 00:15:05.199 "name": "spare", 00:15:05.199 "uuid": "cb106fee-f602-558f-b8a7-baf9a5b2f61d", 00:15:05.199 "is_configured": true, 00:15:05.199 "data_offset": 2048, 00:15:05.199 "data_size": 63488 00:15:05.199 }, 00:15:05.199 { 00:15:05.199 "name": null, 00:15:05.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.199 "is_configured": false, 00:15:05.199 "data_offset": 0, 00:15:05.199 "data_size": 63488 00:15:05.199 }, 00:15:05.199 { 00:15:05.199 "name": "BaseBdev3", 00:15:05.199 "uuid": "f41d4943-a765-5ec4-bcf8-50df2f80f19e", 00:15:05.199 "is_configured": true, 00:15:05.199 "data_offset": 2048, 00:15:05.199 "data_size": 63488 00:15:05.199 }, 00:15:05.199 { 00:15:05.199 "name": "BaseBdev4", 00:15:05.199 "uuid": "1f14dfbe-f8a9-5693-a91c-e04058c8bf63", 00:15:05.199 "is_configured": true, 00:15:05.199 "data_offset": 2048, 00:15:05.199 "data_size": 63488 00:15:05.199 } 00:15:05.199 ] 00:15:05.199 }' 00:15:05.199 10:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.457 10:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:05.457 10:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.457 10:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:05.457 10:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:06.024 [2024-11-15 10:43:26.928797] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:15:06.024 [2024-11-15 10:43:26.929915] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:15:06.590 89.83 IOPS, 269.50 
MiB/s [2024-11-15T10:43:27.753Z] 10:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:06.591 10:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:06.591 10:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.591 10:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:06.591 10:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:06.591 10:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.591 10:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.591 10:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.591 10:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.591 10:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.591 10:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.591 10:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.591 "name": "raid_bdev1", 00:15:06.591 "uuid": "91ad6d5f-1645-40d6-84dc-78478ea68ea6", 00:15:06.591 "strip_size_kb": 0, 00:15:06.591 "state": "online", 00:15:06.591 "raid_level": "raid1", 00:15:06.591 "superblock": true, 00:15:06.591 "num_base_bdevs": 4, 00:15:06.591 "num_base_bdevs_discovered": 3, 00:15:06.591 "num_base_bdevs_operational": 3, 00:15:06.591 "process": { 00:15:06.591 "type": "rebuild", 00:15:06.591 "target": "spare", 00:15:06.591 "progress": { 00:15:06.591 "blocks": 45056, 00:15:06.591 "percent": 70 00:15:06.591 } 00:15:06.591 }, 00:15:06.591 "base_bdevs_list": [ 00:15:06.591 { 00:15:06.591 
"name": "spare", 00:15:06.591 "uuid": "cb106fee-f602-558f-b8a7-baf9a5b2f61d", 00:15:06.591 "is_configured": true, 00:15:06.591 "data_offset": 2048, 00:15:06.591 "data_size": 63488 00:15:06.591 }, 00:15:06.591 { 00:15:06.591 "name": null, 00:15:06.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.591 "is_configured": false, 00:15:06.591 "data_offset": 0, 00:15:06.591 "data_size": 63488 00:15:06.591 }, 00:15:06.591 { 00:15:06.591 "name": "BaseBdev3", 00:15:06.591 "uuid": "f41d4943-a765-5ec4-bcf8-50df2f80f19e", 00:15:06.591 "is_configured": true, 00:15:06.591 "data_offset": 2048, 00:15:06.591 "data_size": 63488 00:15:06.591 }, 00:15:06.591 { 00:15:06.591 "name": "BaseBdev4", 00:15:06.591 "uuid": "1f14dfbe-f8a9-5693-a91c-e04058c8bf63", 00:15:06.591 "is_configured": true, 00:15:06.591 "data_offset": 2048, 00:15:06.591 "data_size": 63488 00:15:06.591 } 00:15:06.591 ] 00:15:06.591 }' 00:15:06.591 [2024-11-15 10:43:27.511599] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:15:06.591 10:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.591 10:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:06.591 10:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.591 10:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:06.591 10:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:07.191 82.43 IOPS, 247.29 MiB/s [2024-11-15T10:43:28.353Z] [2024-11-15 10:43:28.094564] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:15:07.191 [2024-11-15 10:43:28.299189] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 
offset_end: 61440 00:15:07.472 [2024-11-15 10:43:28.540693] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:07.472 10:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:07.472 10:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:07.472 10:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.472 10:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:07.472 10:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:07.472 10:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.472 10:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.472 10:43:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.472 10:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.472 10:43:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.730 10:43:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.730 [2024-11-15 10:43:28.648371] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:07.730 [2024-11-15 10:43:28.651597] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:07.730 10:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.730 "name": "raid_bdev1", 00:15:07.730 "uuid": "91ad6d5f-1645-40d6-84dc-78478ea68ea6", 00:15:07.730 "strip_size_kb": 0, 00:15:07.730 "state": "online", 00:15:07.730 "raid_level": "raid1", 00:15:07.730 "superblock": true, 00:15:07.730 
"num_base_bdevs": 4, 00:15:07.730 "num_base_bdevs_discovered": 3, 00:15:07.730 "num_base_bdevs_operational": 3, 00:15:07.730 "process": { 00:15:07.730 "type": "rebuild", 00:15:07.730 "target": "spare", 00:15:07.730 "progress": { 00:15:07.730 "blocks": 63488, 00:15:07.730 "percent": 100 00:15:07.730 } 00:15:07.730 }, 00:15:07.730 "base_bdevs_list": [ 00:15:07.730 { 00:15:07.730 "name": "spare", 00:15:07.730 "uuid": "cb106fee-f602-558f-b8a7-baf9a5b2f61d", 00:15:07.730 "is_configured": true, 00:15:07.730 "data_offset": 2048, 00:15:07.730 "data_size": 63488 00:15:07.730 }, 00:15:07.730 { 00:15:07.730 "name": null, 00:15:07.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.730 "is_configured": false, 00:15:07.730 "data_offset": 0, 00:15:07.730 "data_size": 63488 00:15:07.730 }, 00:15:07.730 { 00:15:07.730 "name": "BaseBdev3", 00:15:07.730 "uuid": "f41d4943-a765-5ec4-bcf8-50df2f80f19e", 00:15:07.730 "is_configured": true, 00:15:07.730 "data_offset": 2048, 00:15:07.730 "data_size": 63488 00:15:07.730 }, 00:15:07.730 { 00:15:07.730 "name": "BaseBdev4", 00:15:07.730 "uuid": "1f14dfbe-f8a9-5693-a91c-e04058c8bf63", 00:15:07.730 "is_configured": true, 00:15:07.730 "data_offset": 2048, 00:15:07.730 "data_size": 63488 00:15:07.730 } 00:15:07.730 ] 00:15:07.730 }' 00:15:07.730 10:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.730 10:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:07.730 10:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.730 10:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:07.730 10:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:08.924 75.88 IOPS, 227.62 MiB/s [2024-11-15T10:43:30.086Z] 10:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:15:08.924 10:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:08.924 10:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.924 10:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:08.924 10:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:08.924 10:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.924 10:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.924 10:43:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.924 10:43:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.924 10:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.924 10:43:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.924 10:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.924 "name": "raid_bdev1", 00:15:08.924 "uuid": "91ad6d5f-1645-40d6-84dc-78478ea68ea6", 00:15:08.924 "strip_size_kb": 0, 00:15:08.924 "state": "online", 00:15:08.924 "raid_level": "raid1", 00:15:08.924 "superblock": true, 00:15:08.924 "num_base_bdevs": 4, 00:15:08.924 "num_base_bdevs_discovered": 3, 00:15:08.924 "num_base_bdevs_operational": 3, 00:15:08.924 "base_bdevs_list": [ 00:15:08.924 { 00:15:08.924 "name": "spare", 00:15:08.924 "uuid": "cb106fee-f602-558f-b8a7-baf9a5b2f61d", 00:15:08.924 "is_configured": true, 00:15:08.924 "data_offset": 2048, 00:15:08.924 "data_size": 63488 00:15:08.924 }, 00:15:08.924 { 00:15:08.924 "name": null, 00:15:08.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.924 
"is_configured": false, 00:15:08.924 "data_offset": 0, 00:15:08.924 "data_size": 63488 00:15:08.924 }, 00:15:08.924 { 00:15:08.924 "name": "BaseBdev3", 00:15:08.924 "uuid": "f41d4943-a765-5ec4-bcf8-50df2f80f19e", 00:15:08.924 "is_configured": true, 00:15:08.924 "data_offset": 2048, 00:15:08.924 "data_size": 63488 00:15:08.924 }, 00:15:08.924 { 00:15:08.924 "name": "BaseBdev4", 00:15:08.924 "uuid": "1f14dfbe-f8a9-5693-a91c-e04058c8bf63", 00:15:08.924 "is_configured": true, 00:15:08.924 "data_offset": 2048, 00:15:08.924 "data_size": 63488 00:15:08.924 } 00:15:08.924 ] 00:15:08.924 }' 00:15:08.924 10:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.924 10:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:08.924 10:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.924 10:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:08.924 10:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:15:08.924 10:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:08.924 10:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.924 10:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:08.924 10:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:08.924 10:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.924 10:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.924 10:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.924 10:43:29 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.924 10:43:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.924 10:43:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.924 10:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.924 "name": "raid_bdev1", 00:15:08.924 "uuid": "91ad6d5f-1645-40d6-84dc-78478ea68ea6", 00:15:08.924 "strip_size_kb": 0, 00:15:08.924 "state": "online", 00:15:08.924 "raid_level": "raid1", 00:15:08.924 "superblock": true, 00:15:08.924 "num_base_bdevs": 4, 00:15:08.924 "num_base_bdevs_discovered": 3, 00:15:08.924 "num_base_bdevs_operational": 3, 00:15:08.924 "base_bdevs_list": [ 00:15:08.924 { 00:15:08.924 "name": "spare", 00:15:08.924 "uuid": "cb106fee-f602-558f-b8a7-baf9a5b2f61d", 00:15:08.924 "is_configured": true, 00:15:08.924 "data_offset": 2048, 00:15:08.924 "data_size": 63488 00:15:08.924 }, 00:15:08.924 { 00:15:08.924 "name": null, 00:15:08.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.924 "is_configured": false, 00:15:08.924 "data_offset": 0, 00:15:08.924 "data_size": 63488 00:15:08.924 }, 00:15:08.924 { 00:15:08.924 "name": "BaseBdev3", 00:15:08.924 "uuid": "f41d4943-a765-5ec4-bcf8-50df2f80f19e", 00:15:08.924 "is_configured": true, 00:15:08.924 "data_offset": 2048, 00:15:08.924 "data_size": 63488 00:15:08.924 }, 00:15:08.924 { 00:15:08.924 "name": "BaseBdev4", 00:15:08.924 "uuid": "1f14dfbe-f8a9-5693-a91c-e04058c8bf63", 00:15:08.924 "is_configured": true, 00:15:08.924 "data_offset": 2048, 00:15:08.924 "data_size": 63488 00:15:08.924 } 00:15:08.924 ] 00:15:08.924 }' 00:15:08.924 10:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.924 10:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:08.924 10:43:30 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:09.183 71.33 IOPS, 214.00 MiB/s [2024-11-15T10:43:30.345Z] 10:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:09.183 10:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:09.183 10:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:09.183 10:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:09.183 10:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:09.183 10:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:09.183 10:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:09.183 10:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.183 10:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.183 10:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.183 10:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.183 10:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.183 10:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.183 10:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.183 10:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.183 10:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.183 10:43:30 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.183 "name": "raid_bdev1", 00:15:09.183 "uuid": "91ad6d5f-1645-40d6-84dc-78478ea68ea6", 00:15:09.183 "strip_size_kb": 0, 00:15:09.183 "state": "online", 00:15:09.183 "raid_level": "raid1", 00:15:09.183 "superblock": true, 00:15:09.183 "num_base_bdevs": 4, 00:15:09.183 "num_base_bdevs_discovered": 3, 00:15:09.183 "num_base_bdevs_operational": 3, 00:15:09.183 "base_bdevs_list": [ 00:15:09.183 { 00:15:09.183 "name": "spare", 00:15:09.183 "uuid": "cb106fee-f602-558f-b8a7-baf9a5b2f61d", 00:15:09.183 "is_configured": true, 00:15:09.183 "data_offset": 2048, 00:15:09.183 "data_size": 63488 00:15:09.183 }, 00:15:09.183 { 00:15:09.183 "name": null, 00:15:09.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.183 "is_configured": false, 00:15:09.183 "data_offset": 0, 00:15:09.183 "data_size": 63488 00:15:09.183 }, 00:15:09.183 { 00:15:09.183 "name": "BaseBdev3", 00:15:09.183 "uuid": "f41d4943-a765-5ec4-bcf8-50df2f80f19e", 00:15:09.183 "is_configured": true, 00:15:09.183 "data_offset": 2048, 00:15:09.183 "data_size": 63488 00:15:09.183 }, 00:15:09.183 { 00:15:09.183 "name": "BaseBdev4", 00:15:09.183 "uuid": "1f14dfbe-f8a9-5693-a91c-e04058c8bf63", 00:15:09.183 "is_configured": true, 00:15:09.183 "data_offset": 2048, 00:15:09.183 "data_size": 63488 00:15:09.183 } 00:15:09.183 ] 00:15:09.183 }' 00:15:09.183 10:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.183 10:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.749 10:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:09.749 10:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.750 10:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.750 [2024-11-15 10:43:30.635459] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: 
raid_bdev1 00:15:09.750 [2024-11-15 10:43:30.635513] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:09.750 00:15:09.750 Latency(us) 00:15:09.750 [2024-11-15T10:43:30.912Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:09.750 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:09.750 raid_bdev1 : 9.67 68.25 204.75 0.00 0.00 20190.71 290.44 119632.99 00:15:09.750 [2024-11-15T10:43:30.912Z] =================================================================================================================== 00:15:09.750 [2024-11-15T10:43:30.912Z] Total : 68.25 204.75 0.00 0.00 20190.71 290.44 119632.99 00:15:09.750 [2024-11-15 10:43:30.754958] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.750 [2024-11-15 10:43:30.755024] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:09.750 [2024-11-15 10:43:30.755147] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:09.750 [2024-11-15 10:43:30.755169] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:09.750 { 00:15:09.750 "results": [ 00:15:09.750 { 00:15:09.750 "job": "raid_bdev1", 00:15:09.750 "core_mask": "0x1", 00:15:09.750 "workload": "randrw", 00:15:09.750 "percentage": 50, 00:15:09.750 "status": "finished", 00:15:09.750 "queue_depth": 2, 00:15:09.750 "io_size": 3145728, 00:15:09.750 "runtime": 9.670293, 00:15:09.750 "iops": 68.25025880808369, 00:15:09.750 "mibps": 204.75077642425106, 00:15:09.750 "io_failed": 0, 00:15:09.750 "io_timeout": 0, 00:15:09.750 "avg_latency_us": 20190.714358126723, 00:15:09.750 "min_latency_us": 290.44363636363636, 00:15:09.750 "max_latency_us": 119632.98909090909 00:15:09.750 } 00:15:09.750 ], 00:15:09.750 "core_count": 1 00:15:09.750 } 00:15:09.750 10:43:30 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.750 10:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.750 10:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.750 10:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:09.750 10:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.750 10:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.750 10:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:09.750 10:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:09.750 10:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:09.750 10:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:09.750 10:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:09.750 10:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:09.750 10:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:09.750 10:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:09.750 10:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:09.750 10:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:09.750 10:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:09.750 10:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:09.750 10:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:10.009 /dev/nbd0 00:15:10.009 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:10.009 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:10.009 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:10.009 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:10.009 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:10.009 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:10.009 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:10.009 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:10.009 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:10.009 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:10.009 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:10.009 1+0 records in 00:15:10.009 1+0 records out 00:15:10.009 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311146 s, 13.2 MB/s 00:15:10.009 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:10.009 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:10.009 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:10.009 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:10.009 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:10.009 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:10.009 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:10.009 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:10.009 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:15:10.009 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:15:10.009 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:10.009 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:15:10.009 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:15:10.009 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:10.009 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:15:10.009 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:10.009 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:10.009 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:10.009 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:10.009 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:10.009 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:10.009 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:15:10.575 /dev/nbd1 00:15:10.575 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:10.575 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:10.575 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:10.575 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:10.575 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:10.575 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:10.575 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:10.575 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:10.575 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:10.575 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:10.575 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:10.575 1+0 records in 00:15:10.575 1+0 records out 00:15:10.575 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272965 s, 15.0 MB/s 00:15:10.575 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:10.575 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:10.575 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:10.575 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:15:10.575 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:10.575 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:10.575 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:10.575 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:10.575 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:10.575 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:10.575 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:10.575 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:10.575 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:10.575 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:10.575 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:11.142 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:11.142 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:11.142 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:11.142 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:11.142 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:11.142 10:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:11.142 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 
00:15:11.142 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:11.142 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:11.142 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:15:11.142 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:15:11.142 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:11.142 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:15:11.142 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:11.142 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:11.142 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:11.142 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:11.142 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:11.142 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:11.142 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:15:11.142 /dev/nbd1 00:15:11.142 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:11.142 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:11.142 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:11.142 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:11.142 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:11.142 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:11.142 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:11.142 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:11.142 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:11.142 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:11.142 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:11.142 1+0 records in 00:15:11.142 1+0 records out 00:15:11.142 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396074 s, 10.3 MB/s 00:15:11.142 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:11.142 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:11.142 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:11.142 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:11.142 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:11.142 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:11.142 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:11.142 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:11.401 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:11.401 
10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:11.401 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:11.401 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:11.401 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:11.401 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:11.401 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:11.660 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:11.660 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:11.660 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:11.660 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:11.660 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:11.660 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:11.660 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:11.660 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:11.660 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:11.660 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:11.660 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:11.660 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:11.660 
10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:11.660 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:11.660 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:11.919 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:11.919 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:11.919 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:11.919 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:11.919 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:11.919 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:11.919 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:11.919 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:11.919 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:11.919 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:11.919 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.919 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.919 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.919 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:11.919 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.919 
10:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.919 [2024-11-15 10:43:32.993736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:11.919 [2024-11-15 10:43:32.993802] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.919 [2024-11-15 10:43:32.993831] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:11.919 [2024-11-15 10:43:32.993849] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.919 [2024-11-15 10:43:32.996755] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.919 [2024-11-15 10:43:32.996806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:11.919 [2024-11-15 10:43:32.996920] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:11.919 [2024-11-15 10:43:32.997003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:11.919 [2024-11-15 10:43:32.997185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:11.919 [2024-11-15 10:43:32.997337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:11.919 spare 00:15:11.919 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.919 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:11.919 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.919 10:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.178 [2024-11-15 10:43:33.097470] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:12.178 [2024-11-15 10:43:33.097532] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 
63488, blocklen 512 00:15:12.178 [2024-11-15 10:43:33.097940] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:15:12.178 [2024-11-15 10:43:33.098182] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:12.178 [2024-11-15 10:43:33.098204] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:12.178 [2024-11-15 10:43:33.098443] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.178 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.178 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:12.178 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.178 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.179 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:12.179 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:12.179 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:12.179 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.179 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.179 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.179 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.179 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.179 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "raid_bdev1")' 00:15:12.179 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.179 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.179 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.179 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.179 "name": "raid_bdev1", 00:15:12.179 "uuid": "91ad6d5f-1645-40d6-84dc-78478ea68ea6", 00:15:12.179 "strip_size_kb": 0, 00:15:12.179 "state": "online", 00:15:12.179 "raid_level": "raid1", 00:15:12.179 "superblock": true, 00:15:12.179 "num_base_bdevs": 4, 00:15:12.179 "num_base_bdevs_discovered": 3, 00:15:12.179 "num_base_bdevs_operational": 3, 00:15:12.179 "base_bdevs_list": [ 00:15:12.179 { 00:15:12.179 "name": "spare", 00:15:12.179 "uuid": "cb106fee-f602-558f-b8a7-baf9a5b2f61d", 00:15:12.179 "is_configured": true, 00:15:12.179 "data_offset": 2048, 00:15:12.179 "data_size": 63488 00:15:12.179 }, 00:15:12.179 { 00:15:12.179 "name": null, 00:15:12.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.179 "is_configured": false, 00:15:12.179 "data_offset": 2048, 00:15:12.179 "data_size": 63488 00:15:12.179 }, 00:15:12.179 { 00:15:12.179 "name": "BaseBdev3", 00:15:12.179 "uuid": "f41d4943-a765-5ec4-bcf8-50df2f80f19e", 00:15:12.179 "is_configured": true, 00:15:12.179 "data_offset": 2048, 00:15:12.179 "data_size": 63488 00:15:12.179 }, 00:15:12.179 { 00:15:12.179 "name": "BaseBdev4", 00:15:12.179 "uuid": "1f14dfbe-f8a9-5693-a91c-e04058c8bf63", 00:15:12.179 "is_configured": true, 00:15:12.179 "data_offset": 2048, 00:15:12.179 "data_size": 63488 00:15:12.179 } 00:15:12.179 ] 00:15:12.179 }' 00:15:12.179 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.179 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.437 10:43:33 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:12.437 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.437 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:12.437 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:12.437 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.437 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.437 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.437 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.437 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.437 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.695 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.695 "name": "raid_bdev1", 00:15:12.695 "uuid": "91ad6d5f-1645-40d6-84dc-78478ea68ea6", 00:15:12.695 "strip_size_kb": 0, 00:15:12.696 "state": "online", 00:15:12.696 "raid_level": "raid1", 00:15:12.696 "superblock": true, 00:15:12.696 "num_base_bdevs": 4, 00:15:12.696 "num_base_bdevs_discovered": 3, 00:15:12.696 "num_base_bdevs_operational": 3, 00:15:12.696 "base_bdevs_list": [ 00:15:12.696 { 00:15:12.696 "name": "spare", 00:15:12.696 "uuid": "cb106fee-f602-558f-b8a7-baf9a5b2f61d", 00:15:12.696 "is_configured": true, 00:15:12.696 "data_offset": 2048, 00:15:12.696 "data_size": 63488 00:15:12.696 }, 00:15:12.696 { 00:15:12.696 "name": null, 00:15:12.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.696 "is_configured": false, 00:15:12.696 "data_offset": 
2048, 00:15:12.696 "data_size": 63488 00:15:12.696 }, 00:15:12.696 { 00:15:12.696 "name": "BaseBdev3", 00:15:12.696 "uuid": "f41d4943-a765-5ec4-bcf8-50df2f80f19e", 00:15:12.696 "is_configured": true, 00:15:12.696 "data_offset": 2048, 00:15:12.696 "data_size": 63488 00:15:12.696 }, 00:15:12.696 { 00:15:12.696 "name": "BaseBdev4", 00:15:12.696 "uuid": "1f14dfbe-f8a9-5693-a91c-e04058c8bf63", 00:15:12.696 "is_configured": true, 00:15:12.696 "data_offset": 2048, 00:15:12.696 "data_size": 63488 00:15:12.696 } 00:15:12.696 ] 00:15:12.696 }' 00:15:12.696 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:12.696 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:12.696 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:12.696 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:12.696 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:12.696 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.696 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.696 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.696 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.696 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:12.696 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:12.696 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.696 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:15:12.696 [2024-11-15 10:43:33.770673] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:12.696 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.696 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:12.696 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.696 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.696 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:12.696 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:12.696 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:12.696 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.696 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.696 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.696 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.696 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.696 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.696 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.696 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.696 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.696 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:12.696 "name": "raid_bdev1", 00:15:12.696 "uuid": "91ad6d5f-1645-40d6-84dc-78478ea68ea6", 00:15:12.696 "strip_size_kb": 0, 00:15:12.696 "state": "online", 00:15:12.696 "raid_level": "raid1", 00:15:12.696 "superblock": true, 00:15:12.696 "num_base_bdevs": 4, 00:15:12.696 "num_base_bdevs_discovered": 2, 00:15:12.696 "num_base_bdevs_operational": 2, 00:15:12.696 "base_bdevs_list": [ 00:15:12.696 { 00:15:12.696 "name": null, 00:15:12.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.696 "is_configured": false, 00:15:12.696 "data_offset": 0, 00:15:12.696 "data_size": 63488 00:15:12.696 }, 00:15:12.696 { 00:15:12.696 "name": null, 00:15:12.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.696 "is_configured": false, 00:15:12.696 "data_offset": 2048, 00:15:12.696 "data_size": 63488 00:15:12.696 }, 00:15:12.696 { 00:15:12.696 "name": "BaseBdev3", 00:15:12.696 "uuid": "f41d4943-a765-5ec4-bcf8-50df2f80f19e", 00:15:12.696 "is_configured": true, 00:15:12.696 "data_offset": 2048, 00:15:12.696 "data_size": 63488 00:15:12.696 }, 00:15:12.696 { 00:15:12.696 "name": "BaseBdev4", 00:15:12.696 "uuid": "1f14dfbe-f8a9-5693-a91c-e04058c8bf63", 00:15:12.696 "is_configured": true, 00:15:12.696 "data_offset": 2048, 00:15:12.696 "data_size": 63488 00:15:12.696 } 00:15:12.696 ] 00:15:12.696 }' 00:15:12.696 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.696 10:43:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.263 10:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:13.263 10:43:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.263 10:43:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.263 [2024-11-15 10:43:34.270931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 
00:15:13.263 [2024-11-15 10:43:34.271303] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:13.263 [2024-11-15 10:43:34.271340] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:13.263 [2024-11-15 10:43:34.271395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:13.263 [2024-11-15 10:43:34.285131] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:15:13.263 10:43:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.263 10:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:13.263 [2024-11-15 10:43:34.287603] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:14.237 10:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:14.237 10:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.237 10:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:14.237 10:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:14.237 10:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.237 10:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.237 10:43:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.237 10:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.237 10:43:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.237 10:43:35 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.237 10:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.237 "name": "raid_bdev1", 00:15:14.237 "uuid": "91ad6d5f-1645-40d6-84dc-78478ea68ea6", 00:15:14.237 "strip_size_kb": 0, 00:15:14.237 "state": "online", 00:15:14.237 "raid_level": "raid1", 00:15:14.237 "superblock": true, 00:15:14.237 "num_base_bdevs": 4, 00:15:14.237 "num_base_bdevs_discovered": 3, 00:15:14.237 "num_base_bdevs_operational": 3, 00:15:14.237 "process": { 00:15:14.237 "type": "rebuild", 00:15:14.237 "target": "spare", 00:15:14.237 "progress": { 00:15:14.237 "blocks": 20480, 00:15:14.237 "percent": 32 00:15:14.237 } 00:15:14.237 }, 00:15:14.237 "base_bdevs_list": [ 00:15:14.237 { 00:15:14.237 "name": "spare", 00:15:14.237 "uuid": "cb106fee-f602-558f-b8a7-baf9a5b2f61d", 00:15:14.237 "is_configured": true, 00:15:14.237 "data_offset": 2048, 00:15:14.237 "data_size": 63488 00:15:14.237 }, 00:15:14.237 { 00:15:14.237 "name": null, 00:15:14.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.237 "is_configured": false, 00:15:14.237 "data_offset": 2048, 00:15:14.237 "data_size": 63488 00:15:14.237 }, 00:15:14.237 { 00:15:14.237 "name": "BaseBdev3", 00:15:14.237 "uuid": "f41d4943-a765-5ec4-bcf8-50df2f80f19e", 00:15:14.237 "is_configured": true, 00:15:14.237 "data_offset": 2048, 00:15:14.237 "data_size": 63488 00:15:14.237 }, 00:15:14.237 { 00:15:14.237 "name": "BaseBdev4", 00:15:14.237 "uuid": "1f14dfbe-f8a9-5693-a91c-e04058c8bf63", 00:15:14.237 "is_configured": true, 00:15:14.237 "data_offset": 2048, 00:15:14.237 "data_size": 63488 00:15:14.237 } 00:15:14.237 ] 00:15:14.237 }' 00:15:14.237 10:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.496 10:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:14.496 10:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:15:14.496 10:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:14.496 10:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:14.496 10:43:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.496 10:43:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.496 [2024-11-15 10:43:35.457250] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:14.496 [2024-11-15 10:43:35.496373] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:14.496 [2024-11-15 10:43:35.496620] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:14.496 [2024-11-15 10:43:35.496649] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:14.496 [2024-11-15 10:43:35.496664] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:14.496 10:43:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.496 10:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:14.496 10:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:14.496 10:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:14.496 10:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:14.496 10:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:14.496 10:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:14.496 10:43:35 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.496 10:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.496 10:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.496 10:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.496 10:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.496 10:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.496 10:43:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.496 10:43:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.496 10:43:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.496 10:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.496 "name": "raid_bdev1", 00:15:14.496 "uuid": "91ad6d5f-1645-40d6-84dc-78478ea68ea6", 00:15:14.496 "strip_size_kb": 0, 00:15:14.496 "state": "online", 00:15:14.496 "raid_level": "raid1", 00:15:14.496 "superblock": true, 00:15:14.496 "num_base_bdevs": 4, 00:15:14.496 "num_base_bdevs_discovered": 2, 00:15:14.496 "num_base_bdevs_operational": 2, 00:15:14.496 "base_bdevs_list": [ 00:15:14.496 { 00:15:14.496 "name": null, 00:15:14.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.496 "is_configured": false, 00:15:14.496 "data_offset": 0, 00:15:14.496 "data_size": 63488 00:15:14.496 }, 00:15:14.496 { 00:15:14.496 "name": null, 00:15:14.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.496 "is_configured": false, 00:15:14.496 "data_offset": 2048, 00:15:14.496 "data_size": 63488 00:15:14.496 }, 00:15:14.496 { 00:15:14.496 "name": "BaseBdev3", 00:15:14.496 "uuid": "f41d4943-a765-5ec4-bcf8-50df2f80f19e", 00:15:14.496 
"is_configured": true, 00:15:14.496 "data_offset": 2048, 00:15:14.496 "data_size": 63488 00:15:14.496 }, 00:15:14.496 { 00:15:14.496 "name": "BaseBdev4", 00:15:14.496 "uuid": "1f14dfbe-f8a9-5693-a91c-e04058c8bf63", 00:15:14.496 "is_configured": true, 00:15:14.496 "data_offset": 2048, 00:15:14.496 "data_size": 63488 00:15:14.496 } 00:15:14.496 ] 00:15:14.496 }' 00:15:14.496 10:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.496 10:43:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.063 10:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:15.063 10:43:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.063 10:43:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.063 [2024-11-15 10:43:36.055497] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:15.063 [2024-11-15 10:43:36.055738] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.063 [2024-11-15 10:43:36.055821] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:15:15.063 [2024-11-15 10:43:36.055848] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.063 [2024-11-15 10:43:36.056491] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.063 [2024-11-15 10:43:36.056560] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:15.063 [2024-11-15 10:43:36.056700] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:15.063 [2024-11-15 10:43:36.056851] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:15.063 [2024-11-15 10:43:36.056873] 
bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:15.063 [2024-11-15 10:43:36.056909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:15.063 [2024-11-15 10:43:36.071031] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:15:15.063 spare 00:15:15.063 10:43:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.063 10:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:15.063 [2024-11-15 10:43:36.073646] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:15.998 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:15.998 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.998 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:15.998 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:15.998 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.998 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.998 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.998 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.998 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.998 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.998 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.998 "name": "raid_bdev1", 00:15:15.998 
"uuid": "91ad6d5f-1645-40d6-84dc-78478ea68ea6", 00:15:15.998 "strip_size_kb": 0, 00:15:15.998 "state": "online", 00:15:15.998 "raid_level": "raid1", 00:15:15.998 "superblock": true, 00:15:15.998 "num_base_bdevs": 4, 00:15:15.998 "num_base_bdevs_discovered": 3, 00:15:15.998 "num_base_bdevs_operational": 3, 00:15:15.998 "process": { 00:15:15.998 "type": "rebuild", 00:15:15.998 "target": "spare", 00:15:15.998 "progress": { 00:15:15.998 "blocks": 20480, 00:15:15.998 "percent": 32 00:15:15.998 } 00:15:15.998 }, 00:15:15.998 "base_bdevs_list": [ 00:15:15.998 { 00:15:15.998 "name": "spare", 00:15:15.998 "uuid": "cb106fee-f602-558f-b8a7-baf9a5b2f61d", 00:15:15.998 "is_configured": true, 00:15:15.998 "data_offset": 2048, 00:15:15.998 "data_size": 63488 00:15:15.998 }, 00:15:15.998 { 00:15:15.998 "name": null, 00:15:15.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.998 "is_configured": false, 00:15:15.998 "data_offset": 2048, 00:15:15.998 "data_size": 63488 00:15:15.998 }, 00:15:15.998 { 00:15:15.998 "name": "BaseBdev3", 00:15:15.998 "uuid": "f41d4943-a765-5ec4-bcf8-50df2f80f19e", 00:15:15.998 "is_configured": true, 00:15:15.998 "data_offset": 2048, 00:15:15.998 "data_size": 63488 00:15:15.998 }, 00:15:15.998 { 00:15:15.998 "name": "BaseBdev4", 00:15:15.998 "uuid": "1f14dfbe-f8a9-5693-a91c-e04058c8bf63", 00:15:15.998 "is_configured": true, 00:15:15.998 "data_offset": 2048, 00:15:15.998 "data_size": 63488 00:15:15.998 } 00:15:15.998 ] 00:15:15.998 }' 00:15:15.998 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.258 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:16.258 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.258 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:16.258 10:43:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:16.258 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.258 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.258 [2024-11-15 10:43:37.227372] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:16.258 [2024-11-15 10:43:37.282587] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:16.258 [2024-11-15 10:43:37.282840] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:16.258 [2024-11-15 10:43:37.282874] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:16.258 [2024-11-15 10:43:37.282887] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:16.258 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.258 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:16.258 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:16.258 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:16.258 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:16.258 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:16.258 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:16.258 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.258 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.258 10:43:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.258 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.258 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.258 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.258 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.258 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.258 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.258 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.258 "name": "raid_bdev1", 00:15:16.258 "uuid": "91ad6d5f-1645-40d6-84dc-78478ea68ea6", 00:15:16.258 "strip_size_kb": 0, 00:15:16.258 "state": "online", 00:15:16.258 "raid_level": "raid1", 00:15:16.258 "superblock": true, 00:15:16.258 "num_base_bdevs": 4, 00:15:16.258 "num_base_bdevs_discovered": 2, 00:15:16.258 "num_base_bdevs_operational": 2, 00:15:16.258 "base_bdevs_list": [ 00:15:16.258 { 00:15:16.258 "name": null, 00:15:16.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.258 "is_configured": false, 00:15:16.258 "data_offset": 0, 00:15:16.258 "data_size": 63488 00:15:16.258 }, 00:15:16.258 { 00:15:16.258 "name": null, 00:15:16.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.258 "is_configured": false, 00:15:16.258 "data_offset": 2048, 00:15:16.258 "data_size": 63488 00:15:16.258 }, 00:15:16.258 { 00:15:16.258 "name": "BaseBdev3", 00:15:16.258 "uuid": "f41d4943-a765-5ec4-bcf8-50df2f80f19e", 00:15:16.258 "is_configured": true, 00:15:16.258 "data_offset": 2048, 00:15:16.258 "data_size": 63488 00:15:16.258 }, 00:15:16.258 { 00:15:16.258 "name": "BaseBdev4", 00:15:16.258 "uuid": 
"1f14dfbe-f8a9-5693-a91c-e04058c8bf63", 00:15:16.258 "is_configured": true, 00:15:16.258 "data_offset": 2048, 00:15:16.258 "data_size": 63488 00:15:16.258 } 00:15:16.258 ] 00:15:16.258 }' 00:15:16.258 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.258 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.826 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:16.826 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.826 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:16.826 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:16.826 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.826 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.826 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.826 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.826 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.826 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.826 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.826 "name": "raid_bdev1", 00:15:16.826 "uuid": "91ad6d5f-1645-40d6-84dc-78478ea68ea6", 00:15:16.826 "strip_size_kb": 0, 00:15:16.826 "state": "online", 00:15:16.826 "raid_level": "raid1", 00:15:16.826 "superblock": true, 00:15:16.826 "num_base_bdevs": 4, 00:15:16.826 "num_base_bdevs_discovered": 2, 00:15:16.826 "num_base_bdevs_operational": 2, 00:15:16.826 
"base_bdevs_list": [ 00:15:16.826 { 00:15:16.826 "name": null, 00:15:16.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.826 "is_configured": false, 00:15:16.826 "data_offset": 0, 00:15:16.826 "data_size": 63488 00:15:16.826 }, 00:15:16.826 { 00:15:16.826 "name": null, 00:15:16.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.826 "is_configured": false, 00:15:16.826 "data_offset": 2048, 00:15:16.826 "data_size": 63488 00:15:16.826 }, 00:15:16.826 { 00:15:16.826 "name": "BaseBdev3", 00:15:16.826 "uuid": "f41d4943-a765-5ec4-bcf8-50df2f80f19e", 00:15:16.826 "is_configured": true, 00:15:16.826 "data_offset": 2048, 00:15:16.826 "data_size": 63488 00:15:16.826 }, 00:15:16.826 { 00:15:16.826 "name": "BaseBdev4", 00:15:16.826 "uuid": "1f14dfbe-f8a9-5693-a91c-e04058c8bf63", 00:15:16.826 "is_configured": true, 00:15:16.826 "data_offset": 2048, 00:15:16.826 "data_size": 63488 00:15:16.826 } 00:15:16.826 ] 00:15:16.826 }' 00:15:16.826 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.826 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:16.826 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.826 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:16.826 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:16.826 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.826 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.826 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.826 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 
00:15:16.826 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.826 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.826 [2024-11-15 10:43:37.973771] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:16.826 [2024-11-15 10:43:37.973835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.826 [2024-11-15 10:43:37.973869] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:15:16.826 [2024-11-15 10:43:37.973883] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.826 [2024-11-15 10:43:37.974432] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.826 [2024-11-15 10:43:37.974633] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:16.826 [2024-11-15 10:43:37.974759] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:16.826 [2024-11-15 10:43:37.974780] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:16.826 [2024-11-15 10:43:37.974794] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:16.826 [2024-11-15 10:43:37.974809] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:16.826 BaseBdev1 00:15:16.826 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.826 10:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:18.200 10:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:18.200 10:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:15:18.200 10:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:18.200 10:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:18.200 10:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:18.200 10:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:18.200 10:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.200 10:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.200 10:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.200 10:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.200 10:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.200 10:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.200 10:43:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.200 10:43:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:18.200 10:43:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.200 10:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.200 "name": "raid_bdev1", 00:15:18.200 "uuid": "91ad6d5f-1645-40d6-84dc-78478ea68ea6", 00:15:18.200 "strip_size_kb": 0, 00:15:18.200 "state": "online", 00:15:18.200 "raid_level": "raid1", 00:15:18.200 "superblock": true, 00:15:18.200 "num_base_bdevs": 4, 00:15:18.200 "num_base_bdevs_discovered": 2, 00:15:18.200 "num_base_bdevs_operational": 2, 00:15:18.200 "base_bdevs_list": [ 00:15:18.200 { 00:15:18.200 
"name": null, 00:15:18.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.200 "is_configured": false, 00:15:18.200 "data_offset": 0, 00:15:18.200 "data_size": 63488 00:15:18.200 }, 00:15:18.200 { 00:15:18.200 "name": null, 00:15:18.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.200 "is_configured": false, 00:15:18.200 "data_offset": 2048, 00:15:18.200 "data_size": 63488 00:15:18.200 }, 00:15:18.200 { 00:15:18.200 "name": "BaseBdev3", 00:15:18.200 "uuid": "f41d4943-a765-5ec4-bcf8-50df2f80f19e", 00:15:18.200 "is_configured": true, 00:15:18.200 "data_offset": 2048, 00:15:18.200 "data_size": 63488 00:15:18.200 }, 00:15:18.200 { 00:15:18.200 "name": "BaseBdev4", 00:15:18.200 "uuid": "1f14dfbe-f8a9-5693-a91c-e04058c8bf63", 00:15:18.200 "is_configured": true, 00:15:18.201 "data_offset": 2048, 00:15:18.201 "data_size": 63488 00:15:18.201 } 00:15:18.201 ] 00:15:18.201 }' 00:15:18.201 10:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.201 10:43:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:18.459 10:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:18.459 10:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:18.459 10:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:18.459 10:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:18.459 10:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:18.459 10:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.459 10:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.459 10:43:39 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.459 10:43:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:18.459 10:43:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.459 10:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:18.459 "name": "raid_bdev1", 00:15:18.459 "uuid": "91ad6d5f-1645-40d6-84dc-78478ea68ea6", 00:15:18.459 "strip_size_kb": 0, 00:15:18.459 "state": "online", 00:15:18.459 "raid_level": "raid1", 00:15:18.459 "superblock": true, 00:15:18.459 "num_base_bdevs": 4, 00:15:18.459 "num_base_bdevs_discovered": 2, 00:15:18.459 "num_base_bdevs_operational": 2, 00:15:18.459 "base_bdevs_list": [ 00:15:18.459 { 00:15:18.459 "name": null, 00:15:18.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.459 "is_configured": false, 00:15:18.459 "data_offset": 0, 00:15:18.459 "data_size": 63488 00:15:18.459 }, 00:15:18.459 { 00:15:18.459 "name": null, 00:15:18.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.459 "is_configured": false, 00:15:18.459 "data_offset": 2048, 00:15:18.459 "data_size": 63488 00:15:18.459 }, 00:15:18.459 { 00:15:18.459 "name": "BaseBdev3", 00:15:18.459 "uuid": "f41d4943-a765-5ec4-bcf8-50df2f80f19e", 00:15:18.459 "is_configured": true, 00:15:18.459 "data_offset": 2048, 00:15:18.459 "data_size": 63488 00:15:18.459 }, 00:15:18.459 { 00:15:18.459 "name": "BaseBdev4", 00:15:18.459 "uuid": "1f14dfbe-f8a9-5693-a91c-e04058c8bf63", 00:15:18.459 "is_configured": true, 00:15:18.459 "data_offset": 2048, 00:15:18.459 "data_size": 63488 00:15:18.459 } 00:15:18.459 ] 00:15:18.459 }' 00:15:18.459 10:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.459 10:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:18.459 10:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:15:18.459 10:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:18.459 10:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:18.460 10:43:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:15:18.460 10:43:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:18.460 10:43:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:18.460 10:43:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:18.460 10:43:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:18.460 10:43:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:18.460 10:43:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:18.460 10:43:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.460 10:43:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:18.718 [2024-11-15 10:43:39.622507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:18.718 [2024-11-15 10:43:39.622831] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:18.718 [2024-11-15 10:43:39.623006] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:18.718 request: 00:15:18.718 { 00:15:18.718 "base_bdev": "BaseBdev1", 00:15:18.718 "raid_bdev": "raid_bdev1", 00:15:18.718 "method": "bdev_raid_add_base_bdev", 00:15:18.718 
"req_id": 1 00:15:18.718 } 00:15:18.718 Got JSON-RPC error response 00:15:18.718 response: 00:15:18.718 { 00:15:18.718 "code": -22, 00:15:18.718 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:18.718 } 00:15:18.718 10:43:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:18.718 10:43:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:15:18.718 10:43:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:18.718 10:43:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:18.718 10:43:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:18.718 10:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:19.654 10:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:19.654 10:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:19.654 10:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:19.654 10:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:19.654 10:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:19.654 10:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:19.654 10:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.654 10:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.654 10:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.654 10:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.654 
10:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.654 10:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.654 10:43:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.654 10:43:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:19.654 10:43:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.654 10:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.654 "name": "raid_bdev1", 00:15:19.654 "uuid": "91ad6d5f-1645-40d6-84dc-78478ea68ea6", 00:15:19.654 "strip_size_kb": 0, 00:15:19.654 "state": "online", 00:15:19.654 "raid_level": "raid1", 00:15:19.654 "superblock": true, 00:15:19.654 "num_base_bdevs": 4, 00:15:19.654 "num_base_bdevs_discovered": 2, 00:15:19.654 "num_base_bdevs_operational": 2, 00:15:19.654 "base_bdevs_list": [ 00:15:19.654 { 00:15:19.654 "name": null, 00:15:19.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.654 "is_configured": false, 00:15:19.654 "data_offset": 0, 00:15:19.654 "data_size": 63488 00:15:19.654 }, 00:15:19.654 { 00:15:19.654 "name": null, 00:15:19.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.654 "is_configured": false, 00:15:19.654 "data_offset": 2048, 00:15:19.654 "data_size": 63488 00:15:19.654 }, 00:15:19.654 { 00:15:19.654 "name": "BaseBdev3", 00:15:19.654 "uuid": "f41d4943-a765-5ec4-bcf8-50df2f80f19e", 00:15:19.654 "is_configured": true, 00:15:19.654 "data_offset": 2048, 00:15:19.654 "data_size": 63488 00:15:19.654 }, 00:15:19.654 { 00:15:19.654 "name": "BaseBdev4", 00:15:19.654 "uuid": "1f14dfbe-f8a9-5693-a91c-e04058c8bf63", 00:15:19.654 "is_configured": true, 00:15:19.654 "data_offset": 2048, 00:15:19.654 "data_size": 63488 00:15:19.654 } 00:15:19.654 ] 00:15:19.654 }' 00:15:19.654 10:43:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.654 10:43:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.221 10:43:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:20.221 10:43:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.221 10:43:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:20.221 10:43:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:20.221 10:43:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.221 10:43:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.221 10:43:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.221 10:43:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.221 10:43:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.221 10:43:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.221 10:43:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.221 "name": "raid_bdev1", 00:15:20.221 "uuid": "91ad6d5f-1645-40d6-84dc-78478ea68ea6", 00:15:20.221 "strip_size_kb": 0, 00:15:20.221 "state": "online", 00:15:20.221 "raid_level": "raid1", 00:15:20.221 "superblock": true, 00:15:20.221 "num_base_bdevs": 4, 00:15:20.221 "num_base_bdevs_discovered": 2, 00:15:20.221 "num_base_bdevs_operational": 2, 00:15:20.221 "base_bdevs_list": [ 00:15:20.221 { 00:15:20.221 "name": null, 00:15:20.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.221 "is_configured": false, 00:15:20.221 "data_offset": 0, 00:15:20.221 
"data_size": 63488 00:15:20.221 }, 00:15:20.221 { 00:15:20.221 "name": null, 00:15:20.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.222 "is_configured": false, 00:15:20.222 "data_offset": 2048, 00:15:20.222 "data_size": 63488 00:15:20.222 }, 00:15:20.222 { 00:15:20.222 "name": "BaseBdev3", 00:15:20.222 "uuid": "f41d4943-a765-5ec4-bcf8-50df2f80f19e", 00:15:20.222 "is_configured": true, 00:15:20.222 "data_offset": 2048, 00:15:20.222 "data_size": 63488 00:15:20.222 }, 00:15:20.222 { 00:15:20.222 "name": "BaseBdev4", 00:15:20.222 "uuid": "1f14dfbe-f8a9-5693-a91c-e04058c8bf63", 00:15:20.222 "is_configured": true, 00:15:20.222 "data_offset": 2048, 00:15:20.222 "data_size": 63488 00:15:20.222 } 00:15:20.222 ] 00:15:20.222 }' 00:15:20.222 10:43:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:20.222 10:43:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:20.222 10:43:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.222 10:43:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:20.222 10:43:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79437 00:15:20.222 10:43:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79437 ']' 00:15:20.222 10:43:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79437 00:15:20.222 10:43:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:15:20.222 10:43:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:20.222 10:43:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79437 00:15:20.222 killing process with pid 79437 00:15:20.222 Received shutdown signal, test time was about 20.275405 seconds 00:15:20.222 
00:15:20.222 Latency(us) 00:15:20.222 [2024-11-15T10:43:41.384Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:20.222 [2024-11-15T10:43:41.384Z] =================================================================================================================== 00:15:20.222 [2024-11-15T10:43:41.384Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:20.222 10:43:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:20.222 10:43:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:20.222 10:43:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79437' 00:15:20.222 10:43:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79437 00:15:20.222 10:43:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79437 00:15:20.222 [2024-11-15 10:43:41.340423] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:20.222 [2024-11-15 10:43:41.340602] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:20.222 [2024-11-15 10:43:41.340708] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:20.222 [2024-11-15 10:43:41.340738] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:20.788 [2024-11-15 10:43:41.715529] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:21.724 10:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:21.724 00:15:21.724 real 0m23.902s 00:15:21.724 user 0m32.203s 00:15:21.724 sys 0m2.374s 00:15:21.724 10:43:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:21.724 10:43:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:21.724 
************************************ 00:15:21.724 END TEST raid_rebuild_test_sb_io 00:15:21.724 ************************************ 00:15:21.724 10:43:42 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:21.724 10:43:42 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:15:21.724 10:43:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:21.724 10:43:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:21.724 10:43:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:21.724 ************************************ 00:15:21.724 START TEST raid5f_state_function_test 00:15:21.724 ************************************ 00:15:21.724 10:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:15:21.724 10:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:21.724 10:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:21.724 10:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:21.724 10:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:21.724 10:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:21.724 10:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:21.724 10:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:21.724 10:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:21.724 10:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:21.724 10:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:21.724 10:43:42 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:21.724 10:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:21.724 10:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:21.724 10:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:21.724 10:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:21.724 10:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:21.724 10:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:21.724 10:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:21.724 10:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:21.724 Process raid pid: 80195 00:15:21.724 10:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:21.724 10:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:21.724 10:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:21.724 10:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:21.724 10:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:21.724 10:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:21.724 10:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:21.724 10:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80195 00:15:21.724 10:43:42 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:21.724 10:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80195' 00:15:21.724 10:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80195 00:15:21.724 10:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80195 ']' 00:15:21.724 10:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.724 10:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:21.724 10:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:21.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:21.724 10:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:21.724 10:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.982 [2024-11-15 10:43:42.967634] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:15:21.982 [2024-11-15 10:43:42.968854] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:22.241 [2024-11-15 10:43:43.164728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.241 [2024-11-15 10:43:43.324727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.499 [2024-11-15 10:43:43.536749] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:22.499 [2024-11-15 10:43:43.537021] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:23.065 10:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:23.065 10:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:23.065 10:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:23.065 10:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.065 10:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.065 [2024-11-15 10:43:44.032805] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:23.065 [2024-11-15 10:43:44.033021] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:23.065 [2024-11-15 10:43:44.033051] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:23.065 [2024-11-15 10:43:44.033075] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:23.065 [2024-11-15 10:43:44.033093] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:15:23.065 [2024-11-15 10:43:44.033109] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:23.065 10:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.065 10:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:23.065 10:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:23.065 10:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:23.065 10:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:23.065 10:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:23.065 10:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:23.065 10:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.065 10:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.065 10:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.065 10:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.065 10:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.065 10:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.065 10:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.065 10:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.065 10:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:15:23.065 10:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.065 "name": "Existed_Raid", 00:15:23.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.065 "strip_size_kb": 64, 00:15:23.065 "state": "configuring", 00:15:23.065 "raid_level": "raid5f", 00:15:23.065 "superblock": false, 00:15:23.065 "num_base_bdevs": 3, 00:15:23.065 "num_base_bdevs_discovered": 0, 00:15:23.065 "num_base_bdevs_operational": 3, 00:15:23.065 "base_bdevs_list": [ 00:15:23.065 { 00:15:23.065 "name": "BaseBdev1", 00:15:23.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.065 "is_configured": false, 00:15:23.065 "data_offset": 0, 00:15:23.065 "data_size": 0 00:15:23.065 }, 00:15:23.065 { 00:15:23.065 "name": "BaseBdev2", 00:15:23.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.065 "is_configured": false, 00:15:23.065 "data_offset": 0, 00:15:23.065 "data_size": 0 00:15:23.065 }, 00:15:23.065 { 00:15:23.065 "name": "BaseBdev3", 00:15:23.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.065 "is_configured": false, 00:15:23.065 "data_offset": 0, 00:15:23.065 "data_size": 0 00:15:23.065 } 00:15:23.065 ] 00:15:23.065 }' 00:15:23.065 10:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.065 10:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.325 10:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:23.325 10:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.325 10:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.586 [2024-11-15 10:43:44.484888] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:23.586 [2024-11-15 10:43:44.484933] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:15:23.586 10:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.586 10:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:23.586 10:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.586 10:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.586 [2024-11-15 10:43:44.492862] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:23.586 [2024-11-15 10:43:44.493053] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:23.586 [2024-11-15 10:43:44.493190] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:23.586 [2024-11-15 10:43:44.493337] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:23.586 [2024-11-15 10:43:44.493462] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:23.586 [2024-11-15 10:43:44.493634] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:23.586 10:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.586 10:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:23.586 10:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.587 10:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.587 [2024-11-15 10:43:44.541452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:23.587 BaseBdev1 00:15:23.587 10:43:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.587 10:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:23.587 10:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:23.587 10:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:23.587 10:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:23.587 10:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:23.587 10:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:23.587 10:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:23.587 10:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.587 10:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.587 10:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.587 10:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:23.587 10:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.587 10:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.587 [ 00:15:23.587 { 00:15:23.587 "name": "BaseBdev1", 00:15:23.587 "aliases": [ 00:15:23.587 "275e8f6a-abea-46e7-a562-c2092b6aee15" 00:15:23.587 ], 00:15:23.587 "product_name": "Malloc disk", 00:15:23.587 "block_size": 512, 00:15:23.587 "num_blocks": 65536, 00:15:23.587 "uuid": "275e8f6a-abea-46e7-a562-c2092b6aee15", 00:15:23.587 "assigned_rate_limits": { 00:15:23.587 "rw_ios_per_sec": 0, 00:15:23.587 
"rw_mbytes_per_sec": 0, 00:15:23.587 "r_mbytes_per_sec": 0, 00:15:23.587 "w_mbytes_per_sec": 0 00:15:23.587 }, 00:15:23.587 "claimed": true, 00:15:23.587 "claim_type": "exclusive_write", 00:15:23.587 "zoned": false, 00:15:23.587 "supported_io_types": { 00:15:23.587 "read": true, 00:15:23.587 "write": true, 00:15:23.587 "unmap": true, 00:15:23.587 "flush": true, 00:15:23.587 "reset": true, 00:15:23.587 "nvme_admin": false, 00:15:23.587 "nvme_io": false, 00:15:23.587 "nvme_io_md": false, 00:15:23.587 "write_zeroes": true, 00:15:23.587 "zcopy": true, 00:15:23.587 "get_zone_info": false, 00:15:23.587 "zone_management": false, 00:15:23.587 "zone_append": false, 00:15:23.587 "compare": false, 00:15:23.587 "compare_and_write": false, 00:15:23.587 "abort": true, 00:15:23.587 "seek_hole": false, 00:15:23.587 "seek_data": false, 00:15:23.587 "copy": true, 00:15:23.587 "nvme_iov_md": false 00:15:23.587 }, 00:15:23.587 "memory_domains": [ 00:15:23.587 { 00:15:23.587 "dma_device_id": "system", 00:15:23.587 "dma_device_type": 1 00:15:23.587 }, 00:15:23.587 { 00:15:23.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.587 "dma_device_type": 2 00:15:23.587 } 00:15:23.587 ], 00:15:23.587 "driver_specific": {} 00:15:23.587 } 00:15:23.587 ] 00:15:23.587 10:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.587 10:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:23.587 10:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:23.587 10:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:23.587 10:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:23.587 10:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:23.587 10:43:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:23.587 10:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:23.587 10:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.587 10:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.587 10:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.587 10:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.587 10:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.587 10:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.587 10:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.587 10:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.587 10:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.587 10:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.587 "name": "Existed_Raid", 00:15:23.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.587 "strip_size_kb": 64, 00:15:23.587 "state": "configuring", 00:15:23.587 "raid_level": "raid5f", 00:15:23.587 "superblock": false, 00:15:23.587 "num_base_bdevs": 3, 00:15:23.587 "num_base_bdevs_discovered": 1, 00:15:23.587 "num_base_bdevs_operational": 3, 00:15:23.587 "base_bdevs_list": [ 00:15:23.587 { 00:15:23.587 "name": "BaseBdev1", 00:15:23.587 "uuid": "275e8f6a-abea-46e7-a562-c2092b6aee15", 00:15:23.587 "is_configured": true, 00:15:23.587 "data_offset": 0, 00:15:23.587 "data_size": 65536 00:15:23.587 }, 00:15:23.587 { 00:15:23.587 "name": 
"BaseBdev2", 00:15:23.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.587 "is_configured": false, 00:15:23.587 "data_offset": 0, 00:15:23.587 "data_size": 0 00:15:23.587 }, 00:15:23.587 { 00:15:23.587 "name": "BaseBdev3", 00:15:23.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.587 "is_configured": false, 00:15:23.587 "data_offset": 0, 00:15:23.587 "data_size": 0 00:15:23.587 } 00:15:23.587 ] 00:15:23.587 }' 00:15:23.587 10:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.587 10:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.153 10:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:24.153 10:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.153 10:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.153 [2024-11-15 10:43:45.105647] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:24.153 [2024-11-15 10:43:45.105711] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:24.153 10:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.153 10:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:24.153 10:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.153 10:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.153 [2024-11-15 10:43:45.113688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:24.153 [2024-11-15 10:43:45.116237] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:15:24.153 [2024-11-15 10:43:45.116425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:24.153 [2024-11-15 10:43:45.116612] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:24.153 [2024-11-15 10:43:45.116785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:24.153 10:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.153 10:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:24.153 10:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:24.153 10:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:24.153 10:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.153 10:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:24.153 10:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:24.153 10:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.153 10:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:24.153 10:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.153 10:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.153 10:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.153 10:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.153 10:43:45 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.153 10:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.153 10:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.153 10:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.153 10:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.153 10:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.153 "name": "Existed_Raid", 00:15:24.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.153 "strip_size_kb": 64, 00:15:24.153 "state": "configuring", 00:15:24.153 "raid_level": "raid5f", 00:15:24.153 "superblock": false, 00:15:24.153 "num_base_bdevs": 3, 00:15:24.153 "num_base_bdevs_discovered": 1, 00:15:24.153 "num_base_bdevs_operational": 3, 00:15:24.153 "base_bdevs_list": [ 00:15:24.153 { 00:15:24.153 "name": "BaseBdev1", 00:15:24.153 "uuid": "275e8f6a-abea-46e7-a562-c2092b6aee15", 00:15:24.153 "is_configured": true, 00:15:24.153 "data_offset": 0, 00:15:24.153 "data_size": 65536 00:15:24.153 }, 00:15:24.153 { 00:15:24.153 "name": "BaseBdev2", 00:15:24.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.154 "is_configured": false, 00:15:24.154 "data_offset": 0, 00:15:24.154 "data_size": 0 00:15:24.154 }, 00:15:24.154 { 00:15:24.154 "name": "BaseBdev3", 00:15:24.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.154 "is_configured": false, 00:15:24.154 "data_offset": 0, 00:15:24.154 "data_size": 0 00:15:24.154 } 00:15:24.154 ] 00:15:24.154 }' 00:15:24.154 10:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.154 10:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.719 10:43:45 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:24.719 10:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.719 10:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.719 [2024-11-15 10:43:45.676319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:24.720 BaseBdev2 00:15:24.720 10:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.720 10:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:24.720 10:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:24.720 10:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:24.720 10:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:24.720 10:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:24.720 10:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:24.720 10:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:24.720 10:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.720 10:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.720 10:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.720 10:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:24.720 10:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.720 10:43:45 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:24.720 [ 00:15:24.720 { 00:15:24.720 "name": "BaseBdev2", 00:15:24.720 "aliases": [ 00:15:24.720 "7d903d85-a105-4323-9e25-4c5704dccf6b" 00:15:24.720 ], 00:15:24.720 "product_name": "Malloc disk", 00:15:24.720 "block_size": 512, 00:15:24.720 "num_blocks": 65536, 00:15:24.720 "uuid": "7d903d85-a105-4323-9e25-4c5704dccf6b", 00:15:24.720 "assigned_rate_limits": { 00:15:24.720 "rw_ios_per_sec": 0, 00:15:24.720 "rw_mbytes_per_sec": 0, 00:15:24.720 "r_mbytes_per_sec": 0, 00:15:24.720 "w_mbytes_per_sec": 0 00:15:24.720 }, 00:15:24.720 "claimed": true, 00:15:24.720 "claim_type": "exclusive_write", 00:15:24.720 "zoned": false, 00:15:24.720 "supported_io_types": { 00:15:24.720 "read": true, 00:15:24.720 "write": true, 00:15:24.720 "unmap": true, 00:15:24.720 "flush": true, 00:15:24.720 "reset": true, 00:15:24.720 "nvme_admin": false, 00:15:24.720 "nvme_io": false, 00:15:24.720 "nvme_io_md": false, 00:15:24.720 "write_zeroes": true, 00:15:24.720 "zcopy": true, 00:15:24.720 "get_zone_info": false, 00:15:24.720 "zone_management": false, 00:15:24.720 "zone_append": false, 00:15:24.720 "compare": false, 00:15:24.720 "compare_and_write": false, 00:15:24.720 "abort": true, 00:15:24.720 "seek_hole": false, 00:15:24.720 "seek_data": false, 00:15:24.720 "copy": true, 00:15:24.720 "nvme_iov_md": false 00:15:24.720 }, 00:15:24.720 "memory_domains": [ 00:15:24.720 { 00:15:24.720 "dma_device_id": "system", 00:15:24.720 "dma_device_type": 1 00:15:24.720 }, 00:15:24.720 { 00:15:24.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.720 "dma_device_type": 2 00:15:24.720 } 00:15:24.720 ], 00:15:24.720 "driver_specific": {} 00:15:24.720 } 00:15:24.720 ] 00:15:24.720 10:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.720 10:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:24.720 10:43:45 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:24.720 10:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:24.720 10:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:24.720 10:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.720 10:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:24.720 10:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:24.720 10:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.720 10:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:24.720 10:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.720 10:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.720 10:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.720 10:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.720 10:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.720 10:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.720 10:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.720 10:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.720 10:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.720 10:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:15:24.720 "name": "Existed_Raid", 00:15:24.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.720 "strip_size_kb": 64, 00:15:24.720 "state": "configuring", 00:15:24.720 "raid_level": "raid5f", 00:15:24.720 "superblock": false, 00:15:24.720 "num_base_bdevs": 3, 00:15:24.720 "num_base_bdevs_discovered": 2, 00:15:24.720 "num_base_bdevs_operational": 3, 00:15:24.720 "base_bdevs_list": [ 00:15:24.720 { 00:15:24.720 "name": "BaseBdev1", 00:15:24.720 "uuid": "275e8f6a-abea-46e7-a562-c2092b6aee15", 00:15:24.720 "is_configured": true, 00:15:24.720 "data_offset": 0, 00:15:24.720 "data_size": 65536 00:15:24.720 }, 00:15:24.720 { 00:15:24.720 "name": "BaseBdev2", 00:15:24.720 "uuid": "7d903d85-a105-4323-9e25-4c5704dccf6b", 00:15:24.720 "is_configured": true, 00:15:24.720 "data_offset": 0, 00:15:24.720 "data_size": 65536 00:15:24.720 }, 00:15:24.720 { 00:15:24.720 "name": "BaseBdev3", 00:15:24.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.720 "is_configured": false, 00:15:24.720 "data_offset": 0, 00:15:24.720 "data_size": 0 00:15:24.720 } 00:15:24.720 ] 00:15:24.720 }' 00:15:24.720 10:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.720 10:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.287 10:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:25.287 10:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.287 10:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.287 [2024-11-15 10:43:46.272468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:25.287 [2024-11-15 10:43:46.272567] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:25.287 [2024-11-15 10:43:46.272600] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:25.287 [2024-11-15 10:43:46.272987] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:25.287 BaseBdev3 00:15:25.287 [2024-11-15 10:43:46.278363] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:25.287 [2024-11-15 10:43:46.278392] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:25.287 [2024-11-15 10:43:46.278776] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:25.287 10:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.287 10:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:25.287 10:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:25.287 10:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:25.287 10:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:25.288 10:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:25.288 10:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:25.288 10:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:25.288 10:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.288 10:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.288 10:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.288 10:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:15:25.288 10:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.288 10:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.288 [ 00:15:25.288 { 00:15:25.288 "name": "BaseBdev3", 00:15:25.288 "aliases": [ 00:15:25.288 "cbaa96fd-86f0-4dd4-aea7-43433ca2d56a" 00:15:25.288 ], 00:15:25.288 "product_name": "Malloc disk", 00:15:25.288 "block_size": 512, 00:15:25.288 "num_blocks": 65536, 00:15:25.288 "uuid": "cbaa96fd-86f0-4dd4-aea7-43433ca2d56a", 00:15:25.288 "assigned_rate_limits": { 00:15:25.288 "rw_ios_per_sec": 0, 00:15:25.288 "rw_mbytes_per_sec": 0, 00:15:25.288 "r_mbytes_per_sec": 0, 00:15:25.288 "w_mbytes_per_sec": 0 00:15:25.288 }, 00:15:25.288 "claimed": true, 00:15:25.288 "claim_type": "exclusive_write", 00:15:25.288 "zoned": false, 00:15:25.288 "supported_io_types": { 00:15:25.288 "read": true, 00:15:25.288 "write": true, 00:15:25.288 "unmap": true, 00:15:25.288 "flush": true, 00:15:25.288 "reset": true, 00:15:25.288 "nvme_admin": false, 00:15:25.288 "nvme_io": false, 00:15:25.288 "nvme_io_md": false, 00:15:25.288 "write_zeroes": true, 00:15:25.288 "zcopy": true, 00:15:25.288 "get_zone_info": false, 00:15:25.288 "zone_management": false, 00:15:25.288 "zone_append": false, 00:15:25.288 "compare": false, 00:15:25.288 "compare_and_write": false, 00:15:25.288 "abort": true, 00:15:25.288 "seek_hole": false, 00:15:25.288 "seek_data": false, 00:15:25.288 "copy": true, 00:15:25.288 "nvme_iov_md": false 00:15:25.288 }, 00:15:25.288 "memory_domains": [ 00:15:25.288 { 00:15:25.288 "dma_device_id": "system", 00:15:25.288 "dma_device_type": 1 00:15:25.288 }, 00:15:25.288 { 00:15:25.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.288 "dma_device_type": 2 00:15:25.288 } 00:15:25.288 ], 00:15:25.288 "driver_specific": {} 00:15:25.288 } 00:15:25.288 ] 00:15:25.288 10:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:15:25.288 10:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:25.288 10:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:25.288 10:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:25.288 10:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:25.288 10:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.288 10:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:25.288 10:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.288 10:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.288 10:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:25.288 10:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.288 10:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.288 10:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.288 10:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.288 10:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.288 10:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.288 10:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.288 10:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.288 10:43:46 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.288 10:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.288 "name": "Existed_Raid", 00:15:25.288 "uuid": "f259bee2-244d-4c8a-a003-2e2917630735", 00:15:25.288 "strip_size_kb": 64, 00:15:25.288 "state": "online", 00:15:25.288 "raid_level": "raid5f", 00:15:25.288 "superblock": false, 00:15:25.288 "num_base_bdevs": 3, 00:15:25.288 "num_base_bdevs_discovered": 3, 00:15:25.288 "num_base_bdevs_operational": 3, 00:15:25.288 "base_bdevs_list": [ 00:15:25.288 { 00:15:25.288 "name": "BaseBdev1", 00:15:25.288 "uuid": "275e8f6a-abea-46e7-a562-c2092b6aee15", 00:15:25.288 "is_configured": true, 00:15:25.288 "data_offset": 0, 00:15:25.288 "data_size": 65536 00:15:25.288 }, 00:15:25.288 { 00:15:25.288 "name": "BaseBdev2", 00:15:25.288 "uuid": "7d903d85-a105-4323-9e25-4c5704dccf6b", 00:15:25.288 "is_configured": true, 00:15:25.288 "data_offset": 0, 00:15:25.288 "data_size": 65536 00:15:25.288 }, 00:15:25.288 { 00:15:25.288 "name": "BaseBdev3", 00:15:25.288 "uuid": "cbaa96fd-86f0-4dd4-aea7-43433ca2d56a", 00:15:25.288 "is_configured": true, 00:15:25.288 "data_offset": 0, 00:15:25.288 "data_size": 65536 00:15:25.288 } 00:15:25.288 ] 00:15:25.288 }' 00:15:25.288 10:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.288 10:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.854 10:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:25.854 10:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:25.854 10:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:25.854 10:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:25.854 10:43:46 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:25.854 10:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:25.854 10:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:25.854 10:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:25.854 10:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.854 10:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.854 [2024-11-15 10:43:46.809062] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:25.854 10:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.854 10:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:25.854 "name": "Existed_Raid", 00:15:25.854 "aliases": [ 00:15:25.854 "f259bee2-244d-4c8a-a003-2e2917630735" 00:15:25.854 ], 00:15:25.854 "product_name": "Raid Volume", 00:15:25.854 "block_size": 512, 00:15:25.854 "num_blocks": 131072, 00:15:25.854 "uuid": "f259bee2-244d-4c8a-a003-2e2917630735", 00:15:25.854 "assigned_rate_limits": { 00:15:25.854 "rw_ios_per_sec": 0, 00:15:25.854 "rw_mbytes_per_sec": 0, 00:15:25.854 "r_mbytes_per_sec": 0, 00:15:25.854 "w_mbytes_per_sec": 0 00:15:25.854 }, 00:15:25.854 "claimed": false, 00:15:25.854 "zoned": false, 00:15:25.854 "supported_io_types": { 00:15:25.854 "read": true, 00:15:25.854 "write": true, 00:15:25.854 "unmap": false, 00:15:25.854 "flush": false, 00:15:25.854 "reset": true, 00:15:25.854 "nvme_admin": false, 00:15:25.854 "nvme_io": false, 00:15:25.854 "nvme_io_md": false, 00:15:25.854 "write_zeroes": true, 00:15:25.854 "zcopy": false, 00:15:25.854 "get_zone_info": false, 00:15:25.854 "zone_management": false, 00:15:25.854 "zone_append": false, 
00:15:25.854 "compare": false, 00:15:25.854 "compare_and_write": false, 00:15:25.854 "abort": false, 00:15:25.854 "seek_hole": false, 00:15:25.854 "seek_data": false, 00:15:25.854 "copy": false, 00:15:25.854 "nvme_iov_md": false 00:15:25.854 }, 00:15:25.854 "driver_specific": { 00:15:25.854 "raid": { 00:15:25.854 "uuid": "f259bee2-244d-4c8a-a003-2e2917630735", 00:15:25.854 "strip_size_kb": 64, 00:15:25.854 "state": "online", 00:15:25.854 "raid_level": "raid5f", 00:15:25.854 "superblock": false, 00:15:25.854 "num_base_bdevs": 3, 00:15:25.854 "num_base_bdevs_discovered": 3, 00:15:25.854 "num_base_bdevs_operational": 3, 00:15:25.854 "base_bdevs_list": [ 00:15:25.854 { 00:15:25.854 "name": "BaseBdev1", 00:15:25.854 "uuid": "275e8f6a-abea-46e7-a562-c2092b6aee15", 00:15:25.854 "is_configured": true, 00:15:25.854 "data_offset": 0, 00:15:25.854 "data_size": 65536 00:15:25.854 }, 00:15:25.854 { 00:15:25.854 "name": "BaseBdev2", 00:15:25.854 "uuid": "7d903d85-a105-4323-9e25-4c5704dccf6b", 00:15:25.854 "is_configured": true, 00:15:25.854 "data_offset": 0, 00:15:25.854 "data_size": 65536 00:15:25.854 }, 00:15:25.854 { 00:15:25.854 "name": "BaseBdev3", 00:15:25.854 "uuid": "cbaa96fd-86f0-4dd4-aea7-43433ca2d56a", 00:15:25.854 "is_configured": true, 00:15:25.854 "data_offset": 0, 00:15:25.854 "data_size": 65536 00:15:25.854 } 00:15:25.854 ] 00:15:25.854 } 00:15:25.854 } 00:15:25.854 }' 00:15:25.854 10:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:25.854 10:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:25.854 BaseBdev2 00:15:25.854 BaseBdev3' 00:15:25.854 10:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:25.854 10:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:15:25.854 10:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:25.854 10:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:25.854 10:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:25.854 10:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.854 10:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.854 10:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.854 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:25.854 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:25.854 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:25.854 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:25.854 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:25.854 10:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.854 10:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.112 10:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.112 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:26.112 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:26.112 10:43:47 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:26.112 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:26.112 10:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.112 10:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.112 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.112 10:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.112 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:26.112 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:26.112 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:26.112 10:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.112 10:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.112 [2024-11-15 10:43:47.112907] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:26.112 10:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.112 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:26.112 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:26.112 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:26.112 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:26.112 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:26.112 
10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:26.112 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.112 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:26.112 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:26.112 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.112 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:26.112 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.112 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.112 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.112 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.112 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.112 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.112 10:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.112 10:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.112 10:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.112 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.112 "name": "Existed_Raid", 00:15:26.112 "uuid": "f259bee2-244d-4c8a-a003-2e2917630735", 00:15:26.112 "strip_size_kb": 64, 00:15:26.112 "state": 
"online", 00:15:26.112 "raid_level": "raid5f", 00:15:26.112 "superblock": false, 00:15:26.112 "num_base_bdevs": 3, 00:15:26.112 "num_base_bdevs_discovered": 2, 00:15:26.112 "num_base_bdevs_operational": 2, 00:15:26.112 "base_bdevs_list": [ 00:15:26.112 { 00:15:26.112 "name": null, 00:15:26.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.112 "is_configured": false, 00:15:26.112 "data_offset": 0, 00:15:26.112 "data_size": 65536 00:15:26.112 }, 00:15:26.112 { 00:15:26.112 "name": "BaseBdev2", 00:15:26.112 "uuid": "7d903d85-a105-4323-9e25-4c5704dccf6b", 00:15:26.112 "is_configured": true, 00:15:26.112 "data_offset": 0, 00:15:26.112 "data_size": 65536 00:15:26.112 }, 00:15:26.112 { 00:15:26.112 "name": "BaseBdev3", 00:15:26.112 "uuid": "cbaa96fd-86f0-4dd4-aea7-43433ca2d56a", 00:15:26.112 "is_configured": true, 00:15:26.112 "data_offset": 0, 00:15:26.112 "data_size": 65536 00:15:26.112 } 00:15:26.112 ] 00:15:26.112 }' 00:15:26.112 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.112 10:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.678 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:26.678 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:26.678 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.678 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:26.678 10:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.678 10:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.678 10:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.678 10:43:47 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:26.678 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:26.678 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:26.678 10:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.678 10:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.678 [2024-11-15 10:43:47.752117] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:26.678 [2024-11-15 10:43:47.753472] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:26.937 [2024-11-15 10:43:47.837903] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:26.937 10:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.937 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:26.937 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:26.937 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.937 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:26.937 10:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.937 10:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.937 10:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.937 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:26.937 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:15:26.937 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:26.937 10:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.937 10:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.937 [2024-11-15 10:43:47.898012] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:26.937 [2024-11-15 10:43:47.898222] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:26.937 10:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.937 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:26.937 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:26.937 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.937 10:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.937 10:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.937 10:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:26.937 10:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.937 10:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:26.937 10:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:26.937 10:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:26.937 10:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:26.937 10:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:15:26.937 10:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:26.937 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.937 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.937 BaseBdev2 00:15:26.937 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.937 10:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:26.937 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:26.937 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:26.937 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:26.937 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:26.937 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:26.937 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:26.937 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.937 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.937 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.937 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:26.937 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.937 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:15:27.196 [ 00:15:27.196 { 00:15:27.196 "name": "BaseBdev2", 00:15:27.196 "aliases": [ 00:15:27.196 "69fb6752-5c20-4ae7-803c-c415359ed8b9" 00:15:27.196 ], 00:15:27.196 "product_name": "Malloc disk", 00:15:27.196 "block_size": 512, 00:15:27.196 "num_blocks": 65536, 00:15:27.196 "uuid": "69fb6752-5c20-4ae7-803c-c415359ed8b9", 00:15:27.196 "assigned_rate_limits": { 00:15:27.196 "rw_ios_per_sec": 0, 00:15:27.196 "rw_mbytes_per_sec": 0, 00:15:27.196 "r_mbytes_per_sec": 0, 00:15:27.196 "w_mbytes_per_sec": 0 00:15:27.196 }, 00:15:27.196 "claimed": false, 00:15:27.196 "zoned": false, 00:15:27.196 "supported_io_types": { 00:15:27.196 "read": true, 00:15:27.196 "write": true, 00:15:27.196 "unmap": true, 00:15:27.196 "flush": true, 00:15:27.196 "reset": true, 00:15:27.196 "nvme_admin": false, 00:15:27.196 "nvme_io": false, 00:15:27.196 "nvme_io_md": false, 00:15:27.196 "write_zeroes": true, 00:15:27.196 "zcopy": true, 00:15:27.196 "get_zone_info": false, 00:15:27.196 "zone_management": false, 00:15:27.196 "zone_append": false, 00:15:27.196 "compare": false, 00:15:27.196 "compare_and_write": false, 00:15:27.196 "abort": true, 00:15:27.196 "seek_hole": false, 00:15:27.196 "seek_data": false, 00:15:27.196 "copy": true, 00:15:27.196 "nvme_iov_md": false 00:15:27.196 }, 00:15:27.196 "memory_domains": [ 00:15:27.196 { 00:15:27.196 "dma_device_id": "system", 00:15:27.196 "dma_device_type": 1 00:15:27.196 }, 00:15:27.196 { 00:15:27.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.196 "dma_device_type": 2 00:15:27.196 } 00:15:27.196 ], 00:15:27.197 "driver_specific": {} 00:15:27.197 } 00:15:27.197 ] 00:15:27.197 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.197 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:27.197 10:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:27.197 10:43:48 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:27.197 10:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:27.197 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.197 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.197 BaseBdev3 00:15:27.197 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.197 10:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:27.197 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:27.197 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:27.197 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:27.197 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:27.197 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:27.197 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:27.197 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.197 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.197 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.197 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:27.197 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.197 10:43:48 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:27.197 [ 00:15:27.197 { 00:15:27.197 "name": "BaseBdev3", 00:15:27.197 "aliases": [ 00:15:27.197 "cd569150-cda1-4888-8883-c7fbc5d39c6a" 00:15:27.197 ], 00:15:27.197 "product_name": "Malloc disk", 00:15:27.197 "block_size": 512, 00:15:27.197 "num_blocks": 65536, 00:15:27.197 "uuid": "cd569150-cda1-4888-8883-c7fbc5d39c6a", 00:15:27.197 "assigned_rate_limits": { 00:15:27.197 "rw_ios_per_sec": 0, 00:15:27.197 "rw_mbytes_per_sec": 0, 00:15:27.197 "r_mbytes_per_sec": 0, 00:15:27.197 "w_mbytes_per_sec": 0 00:15:27.197 }, 00:15:27.197 "claimed": false, 00:15:27.197 "zoned": false, 00:15:27.197 "supported_io_types": { 00:15:27.197 "read": true, 00:15:27.197 "write": true, 00:15:27.197 "unmap": true, 00:15:27.197 "flush": true, 00:15:27.197 "reset": true, 00:15:27.197 "nvme_admin": false, 00:15:27.197 "nvme_io": false, 00:15:27.197 "nvme_io_md": false, 00:15:27.197 "write_zeroes": true, 00:15:27.197 "zcopy": true, 00:15:27.197 "get_zone_info": false, 00:15:27.197 "zone_management": false, 00:15:27.197 "zone_append": false, 00:15:27.197 "compare": false, 00:15:27.197 "compare_and_write": false, 00:15:27.197 "abort": true, 00:15:27.197 "seek_hole": false, 00:15:27.197 "seek_data": false, 00:15:27.197 "copy": true, 00:15:27.197 "nvme_iov_md": false 00:15:27.197 }, 00:15:27.197 "memory_domains": [ 00:15:27.197 { 00:15:27.197 "dma_device_id": "system", 00:15:27.197 "dma_device_type": 1 00:15:27.197 }, 00:15:27.197 { 00:15:27.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.197 "dma_device_type": 2 00:15:27.197 } 00:15:27.197 ], 00:15:27.197 "driver_specific": {} 00:15:27.197 } 00:15:27.197 ] 00:15:27.197 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.197 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:27.197 10:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:27.197 10:43:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:27.197 10:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:27.197 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.197 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.197 [2024-11-15 10:43:48.190983] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:27.197 [2024-11-15 10:43:48.191175] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:27.197 [2024-11-15 10:43:48.191324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:27.197 [2024-11-15 10:43:48.193887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:27.197 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.197 10:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:27.197 10:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:27.197 10:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:27.197 10:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.197 10:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.197 10:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:27.197 10:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.197 10:43:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.197 10:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.197 10:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.197 10:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.197 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.197 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.197 10:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.197 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.197 10:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.197 "name": "Existed_Raid", 00:15:27.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.197 "strip_size_kb": 64, 00:15:27.197 "state": "configuring", 00:15:27.197 "raid_level": "raid5f", 00:15:27.197 "superblock": false, 00:15:27.197 "num_base_bdevs": 3, 00:15:27.197 "num_base_bdevs_discovered": 2, 00:15:27.197 "num_base_bdevs_operational": 3, 00:15:27.197 "base_bdevs_list": [ 00:15:27.197 { 00:15:27.197 "name": "BaseBdev1", 00:15:27.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.197 "is_configured": false, 00:15:27.197 "data_offset": 0, 00:15:27.198 "data_size": 0 00:15:27.198 }, 00:15:27.198 { 00:15:27.198 "name": "BaseBdev2", 00:15:27.198 "uuid": "69fb6752-5c20-4ae7-803c-c415359ed8b9", 00:15:27.198 "is_configured": true, 00:15:27.198 "data_offset": 0, 00:15:27.198 "data_size": 65536 00:15:27.198 }, 00:15:27.198 { 00:15:27.198 "name": "BaseBdev3", 00:15:27.198 "uuid": "cd569150-cda1-4888-8883-c7fbc5d39c6a", 00:15:27.198 "is_configured": true, 
00:15:27.198 "data_offset": 0, 00:15:27.198 "data_size": 65536 00:15:27.198 } 00:15:27.198 ] 00:15:27.198 }' 00:15:27.198 10:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.198 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.765 10:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:27.766 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.766 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.766 [2024-11-15 10:43:48.719143] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:27.766 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.766 10:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:27.766 10:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:27.766 10:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:27.766 10:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.766 10:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.766 10:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:27.766 10:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.766 10:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.766 10:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.766 10:43:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.766 10:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.766 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.766 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.766 10:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.766 10:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.766 10:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.766 "name": "Existed_Raid", 00:15:27.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.766 "strip_size_kb": 64, 00:15:27.766 "state": "configuring", 00:15:27.766 "raid_level": "raid5f", 00:15:27.766 "superblock": false, 00:15:27.766 "num_base_bdevs": 3, 00:15:27.766 "num_base_bdevs_discovered": 1, 00:15:27.766 "num_base_bdevs_operational": 3, 00:15:27.766 "base_bdevs_list": [ 00:15:27.766 { 00:15:27.766 "name": "BaseBdev1", 00:15:27.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.766 "is_configured": false, 00:15:27.766 "data_offset": 0, 00:15:27.766 "data_size": 0 00:15:27.766 }, 00:15:27.766 { 00:15:27.766 "name": null, 00:15:27.766 "uuid": "69fb6752-5c20-4ae7-803c-c415359ed8b9", 00:15:27.766 "is_configured": false, 00:15:27.766 "data_offset": 0, 00:15:27.766 "data_size": 65536 00:15:27.766 }, 00:15:27.766 { 00:15:27.766 "name": "BaseBdev3", 00:15:27.766 "uuid": "cd569150-cda1-4888-8883-c7fbc5d39c6a", 00:15:27.766 "is_configured": true, 00:15:27.766 "data_offset": 0, 00:15:27.766 "data_size": 65536 00:15:27.766 } 00:15:27.766 ] 00:15:27.766 }' 00:15:27.766 10:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.766 10:43:48 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.333 10:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:28.333 10:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.333 10:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.333 10:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.333 10:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.333 10:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:28.333 10:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:28.333 10:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.333 10:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.333 [2024-11-15 10:43:49.320985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:28.333 BaseBdev1 00:15:28.333 10:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.333 10:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:28.333 10:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:28.333 10:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:28.333 10:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:28.333 10:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:28.333 10:43:49 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:28.333 10:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:28.333 10:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.333 10:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.333 10:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.333 10:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:28.333 10:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.333 10:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.333 [ 00:15:28.333 { 00:15:28.333 "name": "BaseBdev1", 00:15:28.333 "aliases": [ 00:15:28.333 "cbd5dd40-2774-40bf-aa9d-c3a862f47ae0" 00:15:28.333 ], 00:15:28.333 "product_name": "Malloc disk", 00:15:28.333 "block_size": 512, 00:15:28.333 "num_blocks": 65536, 00:15:28.333 "uuid": "cbd5dd40-2774-40bf-aa9d-c3a862f47ae0", 00:15:28.333 "assigned_rate_limits": { 00:15:28.333 "rw_ios_per_sec": 0, 00:15:28.333 "rw_mbytes_per_sec": 0, 00:15:28.333 "r_mbytes_per_sec": 0, 00:15:28.333 "w_mbytes_per_sec": 0 00:15:28.333 }, 00:15:28.333 "claimed": true, 00:15:28.333 "claim_type": "exclusive_write", 00:15:28.333 "zoned": false, 00:15:28.333 "supported_io_types": { 00:15:28.333 "read": true, 00:15:28.333 "write": true, 00:15:28.333 "unmap": true, 00:15:28.333 "flush": true, 00:15:28.333 "reset": true, 00:15:28.333 "nvme_admin": false, 00:15:28.333 "nvme_io": false, 00:15:28.333 "nvme_io_md": false, 00:15:28.333 "write_zeroes": true, 00:15:28.333 "zcopy": true, 00:15:28.333 "get_zone_info": false, 00:15:28.333 "zone_management": false, 00:15:28.333 "zone_append": false, 00:15:28.333 
"compare": false, 00:15:28.333 "compare_and_write": false, 00:15:28.333 "abort": true, 00:15:28.333 "seek_hole": false, 00:15:28.333 "seek_data": false, 00:15:28.333 "copy": true, 00:15:28.333 "nvme_iov_md": false 00:15:28.333 }, 00:15:28.333 "memory_domains": [ 00:15:28.333 { 00:15:28.333 "dma_device_id": "system", 00:15:28.333 "dma_device_type": 1 00:15:28.333 }, 00:15:28.333 { 00:15:28.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.333 "dma_device_type": 2 00:15:28.333 } 00:15:28.333 ], 00:15:28.333 "driver_specific": {} 00:15:28.333 } 00:15:28.333 ] 00:15:28.333 10:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.333 10:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:28.333 10:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:28.333 10:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:28.333 10:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:28.333 10:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:28.333 10:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.333 10:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:28.333 10:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.333 10:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.333 10:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.333 10:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.333 10:43:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.333 10:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.333 10:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.333 10:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.333 10:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.333 10:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.333 "name": "Existed_Raid", 00:15:28.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.333 "strip_size_kb": 64, 00:15:28.333 "state": "configuring", 00:15:28.333 "raid_level": "raid5f", 00:15:28.333 "superblock": false, 00:15:28.333 "num_base_bdevs": 3, 00:15:28.333 "num_base_bdevs_discovered": 2, 00:15:28.333 "num_base_bdevs_operational": 3, 00:15:28.333 "base_bdevs_list": [ 00:15:28.333 { 00:15:28.333 "name": "BaseBdev1", 00:15:28.333 "uuid": "cbd5dd40-2774-40bf-aa9d-c3a862f47ae0", 00:15:28.333 "is_configured": true, 00:15:28.333 "data_offset": 0, 00:15:28.333 "data_size": 65536 00:15:28.333 }, 00:15:28.333 { 00:15:28.333 "name": null, 00:15:28.333 "uuid": "69fb6752-5c20-4ae7-803c-c415359ed8b9", 00:15:28.333 "is_configured": false, 00:15:28.333 "data_offset": 0, 00:15:28.333 "data_size": 65536 00:15:28.333 }, 00:15:28.333 { 00:15:28.333 "name": "BaseBdev3", 00:15:28.333 "uuid": "cd569150-cda1-4888-8883-c7fbc5d39c6a", 00:15:28.333 "is_configured": true, 00:15:28.333 "data_offset": 0, 00:15:28.333 "data_size": 65536 00:15:28.333 } 00:15:28.333 ] 00:15:28.333 }' 00:15:28.333 10:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.333 10:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.907 10:43:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.907 10:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.907 10:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.907 10:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:28.907 10:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.907 10:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:28.907 10:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:28.907 10:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.907 10:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.907 [2024-11-15 10:43:49.965218] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:28.907 10:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.907 10:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:28.907 10:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:28.907 10:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:28.907 10:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:28.907 10:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.907 10:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:28.907 10:43:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.907 10:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.907 10:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.907 10:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.907 10:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.907 10:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.907 10:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.907 10:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.907 10:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.907 10:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.907 "name": "Existed_Raid", 00:15:28.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.907 "strip_size_kb": 64, 00:15:28.907 "state": "configuring", 00:15:28.907 "raid_level": "raid5f", 00:15:28.907 "superblock": false, 00:15:28.907 "num_base_bdevs": 3, 00:15:28.907 "num_base_bdevs_discovered": 1, 00:15:28.907 "num_base_bdevs_operational": 3, 00:15:28.907 "base_bdevs_list": [ 00:15:28.907 { 00:15:28.907 "name": "BaseBdev1", 00:15:28.907 "uuid": "cbd5dd40-2774-40bf-aa9d-c3a862f47ae0", 00:15:28.907 "is_configured": true, 00:15:28.907 "data_offset": 0, 00:15:28.907 "data_size": 65536 00:15:28.907 }, 00:15:28.907 { 00:15:28.907 "name": null, 00:15:28.907 "uuid": "69fb6752-5c20-4ae7-803c-c415359ed8b9", 00:15:28.907 "is_configured": false, 00:15:28.907 "data_offset": 0, 00:15:28.907 "data_size": 65536 00:15:28.907 }, 00:15:28.907 { 00:15:28.907 "name": null, 
00:15:28.907 "uuid": "cd569150-cda1-4888-8883-c7fbc5d39c6a", 00:15:28.907 "is_configured": false, 00:15:28.907 "data_offset": 0, 00:15:28.907 "data_size": 65536 00:15:28.907 } 00:15:28.907 ] 00:15:28.907 }' 00:15:28.907 10:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.907 10:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.473 10:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.473 10:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.473 10:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:29.473 10:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.473 10:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.473 10:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:29.473 10:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:29.473 10:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.473 10:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.473 [2024-11-15 10:43:50.529382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:29.473 10:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.473 10:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:29.473 10:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:29.473 10:43:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:29.473 10:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:29.473 10:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.473 10:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:29.473 10:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.473 10:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.473 10:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.473 10:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.473 10:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.473 10:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.473 10:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.473 10:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.473 10:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.473 10:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.473 "name": "Existed_Raid", 00:15:29.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.473 "strip_size_kb": 64, 00:15:29.473 "state": "configuring", 00:15:29.473 "raid_level": "raid5f", 00:15:29.473 "superblock": false, 00:15:29.473 "num_base_bdevs": 3, 00:15:29.473 "num_base_bdevs_discovered": 2, 00:15:29.473 "num_base_bdevs_operational": 3, 00:15:29.473 "base_bdevs_list": [ 00:15:29.473 { 
00:15:29.473 "name": "BaseBdev1", 00:15:29.473 "uuid": "cbd5dd40-2774-40bf-aa9d-c3a862f47ae0", 00:15:29.473 "is_configured": true, 00:15:29.473 "data_offset": 0, 00:15:29.473 "data_size": 65536 00:15:29.473 }, 00:15:29.473 { 00:15:29.473 "name": null, 00:15:29.473 "uuid": "69fb6752-5c20-4ae7-803c-c415359ed8b9", 00:15:29.473 "is_configured": false, 00:15:29.473 "data_offset": 0, 00:15:29.473 "data_size": 65536 00:15:29.473 }, 00:15:29.473 { 00:15:29.473 "name": "BaseBdev3", 00:15:29.473 "uuid": "cd569150-cda1-4888-8883-c7fbc5d39c6a", 00:15:29.473 "is_configured": true, 00:15:29.473 "data_offset": 0, 00:15:29.473 "data_size": 65536 00:15:29.473 } 00:15:29.473 ] 00:15:29.473 }' 00:15:29.473 10:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.473 10:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.038 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:30.038 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.038 10:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.038 10:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.038 10:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.038 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:30.038 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:30.038 10:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.038 10:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.038 [2024-11-15 10:43:51.121575] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:30.296 10:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.296 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:30.296 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.296 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:30.296 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:30.296 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.296 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:30.296 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.296 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.296 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.296 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.296 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.296 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.296 10:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.296 10:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.296 10:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.296 10:43:51 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.296 "name": "Existed_Raid", 00:15:30.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.296 "strip_size_kb": 64, 00:15:30.296 "state": "configuring", 00:15:30.296 "raid_level": "raid5f", 00:15:30.296 "superblock": false, 00:15:30.296 "num_base_bdevs": 3, 00:15:30.296 "num_base_bdevs_discovered": 1, 00:15:30.296 "num_base_bdevs_operational": 3, 00:15:30.296 "base_bdevs_list": [ 00:15:30.296 { 00:15:30.296 "name": null, 00:15:30.296 "uuid": "cbd5dd40-2774-40bf-aa9d-c3a862f47ae0", 00:15:30.296 "is_configured": false, 00:15:30.296 "data_offset": 0, 00:15:30.296 "data_size": 65536 00:15:30.296 }, 00:15:30.296 { 00:15:30.296 "name": null, 00:15:30.296 "uuid": "69fb6752-5c20-4ae7-803c-c415359ed8b9", 00:15:30.296 "is_configured": false, 00:15:30.296 "data_offset": 0, 00:15:30.296 "data_size": 65536 00:15:30.296 }, 00:15:30.296 { 00:15:30.296 "name": "BaseBdev3", 00:15:30.296 "uuid": "cd569150-cda1-4888-8883-c7fbc5d39c6a", 00:15:30.296 "is_configured": true, 00:15:30.296 "data_offset": 0, 00:15:30.296 "data_size": 65536 00:15:30.296 } 00:15:30.296 ] 00:15:30.296 }' 00:15:30.296 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.296 10:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.554 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.554 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:30.554 10:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.554 10:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.554 10:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.810 10:43:51 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:30.810 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:30.810 10:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.810 10:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.810 [2024-11-15 10:43:51.749364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:30.810 10:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.810 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:30.810 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.810 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:30.810 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:30.810 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.810 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:30.811 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.811 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.811 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.811 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.811 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.811 10:43:51 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.811 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.811 10:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.811 10:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.811 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.811 "name": "Existed_Raid", 00:15:30.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.811 "strip_size_kb": 64, 00:15:30.811 "state": "configuring", 00:15:30.811 "raid_level": "raid5f", 00:15:30.811 "superblock": false, 00:15:30.811 "num_base_bdevs": 3, 00:15:30.811 "num_base_bdevs_discovered": 2, 00:15:30.811 "num_base_bdevs_operational": 3, 00:15:30.811 "base_bdevs_list": [ 00:15:30.811 { 00:15:30.811 "name": null, 00:15:30.811 "uuid": "cbd5dd40-2774-40bf-aa9d-c3a862f47ae0", 00:15:30.811 "is_configured": false, 00:15:30.811 "data_offset": 0, 00:15:30.811 "data_size": 65536 00:15:30.811 }, 00:15:30.811 { 00:15:30.811 "name": "BaseBdev2", 00:15:30.811 "uuid": "69fb6752-5c20-4ae7-803c-c415359ed8b9", 00:15:30.811 "is_configured": true, 00:15:30.811 "data_offset": 0, 00:15:30.811 "data_size": 65536 00:15:30.811 }, 00:15:30.811 { 00:15:30.811 "name": "BaseBdev3", 00:15:30.811 "uuid": "cd569150-cda1-4888-8883-c7fbc5d39c6a", 00:15:30.811 "is_configured": true, 00:15:30.811 "data_offset": 0, 00:15:30.811 "data_size": 65536 00:15:30.811 } 00:15:30.811 ] 00:15:30.811 }' 00:15:30.811 10:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.811 10:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.375 10:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.375 10:43:52 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.375 10:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.375 10:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:31.375 10:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.375 10:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:31.375 10:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.375 10:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.375 10:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.375 10:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:31.375 10:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.376 10:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u cbd5dd40-2774-40bf-aa9d-c3a862f47ae0 00:15:31.376 10:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.376 10:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.376 [2024-11-15 10:43:52.391667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:31.376 [2024-11-15 10:43:52.391893] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:31.376 [2024-11-15 10:43:52.391925] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:31.376 [2024-11-15 10:43:52.392271] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:15:31.376 [2024-11-15 10:43:52.397314] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:31.376 [2024-11-15 10:43:52.397341] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:31.376 [2024-11-15 10:43:52.397723] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.376 NewBaseBdev 00:15:31.376 10:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.376 10:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:31.376 10:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:31.376 10:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:31.376 10:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:31.376 10:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:31.376 10:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:31.376 10:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:31.376 10:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.376 10:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.376 10:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.376 10:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:31.376 10:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.376 10:43:52 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.376 [ 00:15:31.376 { 00:15:31.376 "name": "NewBaseBdev", 00:15:31.376 "aliases": [ 00:15:31.376 "cbd5dd40-2774-40bf-aa9d-c3a862f47ae0" 00:15:31.376 ], 00:15:31.376 "product_name": "Malloc disk", 00:15:31.376 "block_size": 512, 00:15:31.376 "num_blocks": 65536, 00:15:31.376 "uuid": "cbd5dd40-2774-40bf-aa9d-c3a862f47ae0", 00:15:31.376 "assigned_rate_limits": { 00:15:31.376 "rw_ios_per_sec": 0, 00:15:31.376 "rw_mbytes_per_sec": 0, 00:15:31.376 "r_mbytes_per_sec": 0, 00:15:31.376 "w_mbytes_per_sec": 0 00:15:31.376 }, 00:15:31.376 "claimed": true, 00:15:31.376 "claim_type": "exclusive_write", 00:15:31.376 "zoned": false, 00:15:31.376 "supported_io_types": { 00:15:31.376 "read": true, 00:15:31.376 "write": true, 00:15:31.376 "unmap": true, 00:15:31.376 "flush": true, 00:15:31.376 "reset": true, 00:15:31.376 "nvme_admin": false, 00:15:31.376 "nvme_io": false, 00:15:31.376 "nvme_io_md": false, 00:15:31.376 "write_zeroes": true, 00:15:31.376 "zcopy": true, 00:15:31.376 "get_zone_info": false, 00:15:31.376 "zone_management": false, 00:15:31.376 "zone_append": false, 00:15:31.376 "compare": false, 00:15:31.376 "compare_and_write": false, 00:15:31.376 "abort": true, 00:15:31.376 "seek_hole": false, 00:15:31.376 "seek_data": false, 00:15:31.376 "copy": true, 00:15:31.376 "nvme_iov_md": false 00:15:31.376 }, 00:15:31.376 "memory_domains": [ 00:15:31.376 { 00:15:31.376 "dma_device_id": "system", 00:15:31.376 "dma_device_type": 1 00:15:31.376 }, 00:15:31.376 { 00:15:31.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.376 "dma_device_type": 2 00:15:31.376 } 00:15:31.376 ], 00:15:31.376 "driver_specific": {} 00:15:31.376 } 00:15:31.376 ] 00:15:31.376 10:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.376 10:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:31.376 10:43:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:31.376 10:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:31.376 10:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:31.376 10:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:31.376 10:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.376 10:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:31.376 10:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.376 10:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.376 10:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.376 10:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.376 10:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.376 10:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.376 10:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.376 10:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.376 10:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.376 10:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.376 "name": "Existed_Raid", 00:15:31.376 "uuid": "4b3cd497-d174-4505-a41a-e66c64f37c84", 00:15:31.376 "strip_size_kb": 64, 00:15:31.376 "state": "online", 
00:15:31.376 "raid_level": "raid5f", 00:15:31.376 "superblock": false, 00:15:31.376 "num_base_bdevs": 3, 00:15:31.376 "num_base_bdevs_discovered": 3, 00:15:31.376 "num_base_bdevs_operational": 3, 00:15:31.376 "base_bdevs_list": [ 00:15:31.376 { 00:15:31.376 "name": "NewBaseBdev", 00:15:31.376 "uuid": "cbd5dd40-2774-40bf-aa9d-c3a862f47ae0", 00:15:31.376 "is_configured": true, 00:15:31.376 "data_offset": 0, 00:15:31.376 "data_size": 65536 00:15:31.376 }, 00:15:31.376 { 00:15:31.376 "name": "BaseBdev2", 00:15:31.376 "uuid": "69fb6752-5c20-4ae7-803c-c415359ed8b9", 00:15:31.376 "is_configured": true, 00:15:31.376 "data_offset": 0, 00:15:31.376 "data_size": 65536 00:15:31.376 }, 00:15:31.376 { 00:15:31.376 "name": "BaseBdev3", 00:15:31.376 "uuid": "cd569150-cda1-4888-8883-c7fbc5d39c6a", 00:15:31.377 "is_configured": true, 00:15:31.377 "data_offset": 0, 00:15:31.377 "data_size": 65536 00:15:31.377 } 00:15:31.377 ] 00:15:31.377 }' 00:15:31.377 10:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.377 10:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.943 10:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:31.943 10:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:31.943 10:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:31.943 10:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:31.943 10:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:31.943 10:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:31.943 10:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:31.943 10:43:52 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.943 10:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:31.943 10:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.943 [2024-11-15 10:43:52.927930] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:31.943 10:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.943 10:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:31.943 "name": "Existed_Raid", 00:15:31.943 "aliases": [ 00:15:31.943 "4b3cd497-d174-4505-a41a-e66c64f37c84" 00:15:31.943 ], 00:15:31.943 "product_name": "Raid Volume", 00:15:31.943 "block_size": 512, 00:15:31.943 "num_blocks": 131072, 00:15:31.943 "uuid": "4b3cd497-d174-4505-a41a-e66c64f37c84", 00:15:31.943 "assigned_rate_limits": { 00:15:31.943 "rw_ios_per_sec": 0, 00:15:31.943 "rw_mbytes_per_sec": 0, 00:15:31.943 "r_mbytes_per_sec": 0, 00:15:31.943 "w_mbytes_per_sec": 0 00:15:31.943 }, 00:15:31.943 "claimed": false, 00:15:31.943 "zoned": false, 00:15:31.943 "supported_io_types": { 00:15:31.943 "read": true, 00:15:31.943 "write": true, 00:15:31.943 "unmap": false, 00:15:31.943 "flush": false, 00:15:31.943 "reset": true, 00:15:31.943 "nvme_admin": false, 00:15:31.943 "nvme_io": false, 00:15:31.943 "nvme_io_md": false, 00:15:31.943 "write_zeroes": true, 00:15:31.943 "zcopy": false, 00:15:31.943 "get_zone_info": false, 00:15:31.943 "zone_management": false, 00:15:31.943 "zone_append": false, 00:15:31.943 "compare": false, 00:15:31.943 "compare_and_write": false, 00:15:31.943 "abort": false, 00:15:31.943 "seek_hole": false, 00:15:31.943 "seek_data": false, 00:15:31.943 "copy": false, 00:15:31.943 "nvme_iov_md": false 00:15:31.943 }, 00:15:31.943 "driver_specific": { 00:15:31.943 "raid": { 00:15:31.943 "uuid": 
"4b3cd497-d174-4505-a41a-e66c64f37c84", 00:15:31.943 "strip_size_kb": 64, 00:15:31.943 "state": "online", 00:15:31.943 "raid_level": "raid5f", 00:15:31.943 "superblock": false, 00:15:31.943 "num_base_bdevs": 3, 00:15:31.943 "num_base_bdevs_discovered": 3, 00:15:31.943 "num_base_bdevs_operational": 3, 00:15:31.943 "base_bdevs_list": [ 00:15:31.943 { 00:15:31.943 "name": "NewBaseBdev", 00:15:31.943 "uuid": "cbd5dd40-2774-40bf-aa9d-c3a862f47ae0", 00:15:31.943 "is_configured": true, 00:15:31.943 "data_offset": 0, 00:15:31.943 "data_size": 65536 00:15:31.943 }, 00:15:31.943 { 00:15:31.943 "name": "BaseBdev2", 00:15:31.943 "uuid": "69fb6752-5c20-4ae7-803c-c415359ed8b9", 00:15:31.943 "is_configured": true, 00:15:31.943 "data_offset": 0, 00:15:31.943 "data_size": 65536 00:15:31.943 }, 00:15:31.943 { 00:15:31.943 "name": "BaseBdev3", 00:15:31.943 "uuid": "cd569150-cda1-4888-8883-c7fbc5d39c6a", 00:15:31.943 "is_configured": true, 00:15:31.943 "data_offset": 0, 00:15:31.943 "data_size": 65536 00:15:31.943 } 00:15:31.943 ] 00:15:31.943 } 00:15:31.943 } 00:15:31.943 }' 00:15:31.943 10:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:31.943 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:31.943 BaseBdev2 00:15:31.943 BaseBdev3' 00:15:31.943 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:31.943 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:31.943 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:31.943 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:31.943 10:43:53 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.943 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.943 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:31.943 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.202 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:32.202 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:32.202 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:32.202 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:32.202 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.202 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:32.202 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.202 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.202 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:32.202 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:32.202 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:32.202 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:32.202 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:15:32.202 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.202 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.202 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.202 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:32.202 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:32.202 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:32.202 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.202 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.202 [2024-11-15 10:43:53.227742] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:32.202 [2024-11-15 10:43:53.227787] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:32.202 [2024-11-15 10:43:53.227877] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:32.202 [2024-11-15 10:43:53.228258] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:32.202 [2024-11-15 10:43:53.228286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:32.202 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.202 10:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80195 00:15:32.202 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80195 ']' 00:15:32.202 10:43:53 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 80195 00:15:32.202 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:32.202 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:32.202 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80195 00:15:32.202 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:32.202 killing process with pid 80195 00:15:32.202 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:32.202 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80195' 00:15:32.202 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 80195 00:15:32.202 [2024-11-15 10:43:53.261835] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:32.202 10:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 80195 00:15:32.460 [2024-11-15 10:43:53.531220] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:33.836 10:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:33.836 00:15:33.836 real 0m11.713s 00:15:33.836 user 0m19.430s 00:15:33.836 sys 0m1.635s 00:15:33.836 10:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:33.836 ************************************ 00:15:33.836 END TEST raid5f_state_function_test 00:15:33.836 ************************************ 00:15:33.836 10:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.836 10:43:54 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:15:33.836 10:43:54 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:33.836 10:43:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:33.836 10:43:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:33.836 ************************************ 00:15:33.836 START TEST raid5f_state_function_test_sb 00:15:33.836 ************************************ 00:15:33.836 10:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:15:33.836 10:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:33.836 10:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:33.836 10:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:33.836 10:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:33.836 10:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:33.836 10:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:33.836 10:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:33.836 10:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:33.836 10:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:33.836 10:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:33.836 10:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:33.836 10:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:33.836 10:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:33.836 10:43:54 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:33.836 10:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:33.836 10:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:33.836 10:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:33.836 10:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:33.836 10:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:33.836 10:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:33.836 10:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:33.836 10:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:33.836 10:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:33.836 10:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:33.836 10:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:33.836 10:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:33.836 10:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80823 00:15:33.836 Process raid pid: 80823 00:15:33.836 10:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80823' 00:15:33.836 10:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:33.836 10:43:54 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80823 00:15:33.836 10:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80823 ']' 00:15:33.836 10:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.836 10:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:33.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:33.836 10:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.836 10:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:33.836 10:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.836 [2024-11-15 10:43:54.733140] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:15:33.836 [2024-11-15 10:43:54.733314] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.836 [2024-11-15 10:43:54.917789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.095 [2024-11-15 10:43:55.048563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.095 [2024-11-15 10:43:55.253609] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:34.095 [2024-11-15 10:43:55.253659] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:34.660 10:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:34.660 10:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:34.660 10:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:34.660 10:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.660 10:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.660 [2024-11-15 10:43:55.713742] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:34.660 [2024-11-15 10:43:55.713798] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:34.660 [2024-11-15 10:43:55.713814] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:34.660 [2024-11-15 10:43:55.713830] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:34.660 [2024-11-15 10:43:55.713840] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:15:34.660 [2024-11-15 10:43:55.713855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:34.660 10:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.661 10:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:34.661 10:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:34.661 10:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:34.661 10:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.661 10:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.661 10:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:34.661 10:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.661 10:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.661 10:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.661 10:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.661 10:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.661 10:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.661 10:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.661 10:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.661 10:43:55 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.661 10:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.661 "name": "Existed_Raid", 00:15:34.661 "uuid": "581f0734-33d6-4d30-93b6-3c33493faca4", 00:15:34.661 "strip_size_kb": 64, 00:15:34.661 "state": "configuring", 00:15:34.661 "raid_level": "raid5f", 00:15:34.661 "superblock": true, 00:15:34.661 "num_base_bdevs": 3, 00:15:34.661 "num_base_bdevs_discovered": 0, 00:15:34.661 "num_base_bdevs_operational": 3, 00:15:34.661 "base_bdevs_list": [ 00:15:34.661 { 00:15:34.661 "name": "BaseBdev1", 00:15:34.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.661 "is_configured": false, 00:15:34.661 "data_offset": 0, 00:15:34.661 "data_size": 0 00:15:34.661 }, 00:15:34.661 { 00:15:34.661 "name": "BaseBdev2", 00:15:34.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.661 "is_configured": false, 00:15:34.661 "data_offset": 0, 00:15:34.661 "data_size": 0 00:15:34.661 }, 00:15:34.661 { 00:15:34.661 "name": "BaseBdev3", 00:15:34.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.661 "is_configured": false, 00:15:34.661 "data_offset": 0, 00:15:34.661 "data_size": 0 00:15:34.661 } 00:15:34.661 ] 00:15:34.661 }' 00:15:34.661 10:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.661 10:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.227 10:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:35.227 10:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.227 10:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.227 [2024-11-15 10:43:56.225819] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:35.227 
[2024-11-15 10:43:56.225877] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:35.227 10:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.227 10:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:35.227 10:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.227 10:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.227 [2024-11-15 10:43:56.233807] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:35.227 [2024-11-15 10:43:56.233857] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:35.227 [2024-11-15 10:43:56.233876] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:35.227 [2024-11-15 10:43:56.233892] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:35.227 [2024-11-15 10:43:56.233901] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:35.227 [2024-11-15 10:43:56.233916] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:35.227 10:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.227 10:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:35.227 10:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.227 10:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.227 [2024-11-15 10:43:56.278559] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:35.227 BaseBdev1 00:15:35.227 10:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.227 10:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:35.227 10:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:35.227 10:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:35.227 10:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:35.227 10:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:35.227 10:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:35.227 10:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:35.227 10:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.227 10:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.227 10:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.227 10:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:35.227 10:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.227 10:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.227 [ 00:15:35.227 { 00:15:35.227 "name": "BaseBdev1", 00:15:35.227 "aliases": [ 00:15:35.227 "79e8051f-2af4-485c-85d9-ba4706ac2ac4" 00:15:35.227 ], 00:15:35.227 "product_name": "Malloc disk", 00:15:35.227 "block_size": 512, 00:15:35.227 
"num_blocks": 65536, 00:15:35.227 "uuid": "79e8051f-2af4-485c-85d9-ba4706ac2ac4", 00:15:35.227 "assigned_rate_limits": { 00:15:35.227 "rw_ios_per_sec": 0, 00:15:35.227 "rw_mbytes_per_sec": 0, 00:15:35.227 "r_mbytes_per_sec": 0, 00:15:35.227 "w_mbytes_per_sec": 0 00:15:35.227 }, 00:15:35.227 "claimed": true, 00:15:35.227 "claim_type": "exclusive_write", 00:15:35.227 "zoned": false, 00:15:35.227 "supported_io_types": { 00:15:35.227 "read": true, 00:15:35.227 "write": true, 00:15:35.227 "unmap": true, 00:15:35.227 "flush": true, 00:15:35.227 "reset": true, 00:15:35.227 "nvme_admin": false, 00:15:35.227 "nvme_io": false, 00:15:35.227 "nvme_io_md": false, 00:15:35.227 "write_zeroes": true, 00:15:35.227 "zcopy": true, 00:15:35.227 "get_zone_info": false, 00:15:35.227 "zone_management": false, 00:15:35.227 "zone_append": false, 00:15:35.227 "compare": false, 00:15:35.227 "compare_and_write": false, 00:15:35.227 "abort": true, 00:15:35.227 "seek_hole": false, 00:15:35.227 "seek_data": false, 00:15:35.227 "copy": true, 00:15:35.227 "nvme_iov_md": false 00:15:35.227 }, 00:15:35.227 "memory_domains": [ 00:15:35.227 { 00:15:35.227 "dma_device_id": "system", 00:15:35.227 "dma_device_type": 1 00:15:35.227 }, 00:15:35.227 { 00:15:35.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.227 "dma_device_type": 2 00:15:35.227 } 00:15:35.227 ], 00:15:35.227 "driver_specific": {} 00:15:35.227 } 00:15:35.227 ] 00:15:35.228 10:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.228 10:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:35.228 10:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:35.228 10:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:35.228 10:43:56 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:35.228 10:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.228 10:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.228 10:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:35.228 10:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.228 10:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.228 10:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.228 10:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.228 10:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.228 10:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.228 10:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.228 10:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.228 10:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.228 10:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.228 "name": "Existed_Raid", 00:15:35.228 "uuid": "5fc10e87-b887-4c1d-950e-045c7b6b6dda", 00:15:35.228 "strip_size_kb": 64, 00:15:35.228 "state": "configuring", 00:15:35.228 "raid_level": "raid5f", 00:15:35.228 "superblock": true, 00:15:35.228 "num_base_bdevs": 3, 00:15:35.228 "num_base_bdevs_discovered": 1, 00:15:35.228 "num_base_bdevs_operational": 3, 00:15:35.228 "base_bdevs_list": [ 00:15:35.228 { 00:15:35.228 
"name": "BaseBdev1", 00:15:35.228 "uuid": "79e8051f-2af4-485c-85d9-ba4706ac2ac4", 00:15:35.228 "is_configured": true, 00:15:35.228 "data_offset": 2048, 00:15:35.228 "data_size": 63488 00:15:35.228 }, 00:15:35.228 { 00:15:35.228 "name": "BaseBdev2", 00:15:35.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.228 "is_configured": false, 00:15:35.228 "data_offset": 0, 00:15:35.228 "data_size": 0 00:15:35.228 }, 00:15:35.228 { 00:15:35.228 "name": "BaseBdev3", 00:15:35.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.228 "is_configured": false, 00:15:35.228 "data_offset": 0, 00:15:35.228 "data_size": 0 00:15:35.228 } 00:15:35.228 ] 00:15:35.228 }' 00:15:35.228 10:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.228 10:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.794 10:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:35.794 10:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.794 10:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.794 [2024-11-15 10:43:56.830751] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:35.794 [2024-11-15 10:43:56.830815] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:35.794 10:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.794 10:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:35.794 10:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.794 10:43:56 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:15:35.794 [2024-11-15 10:43:56.838806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:35.794 [2024-11-15 10:43:56.841246] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:35.794 [2024-11-15 10:43:56.841310] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:35.794 [2024-11-15 10:43:56.841326] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:35.794 [2024-11-15 10:43:56.841340] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:35.794 10:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.794 10:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:35.794 10:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:35.794 10:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:35.794 10:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:35.794 10:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:35.795 10:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.795 10:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.795 10:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:35.795 10:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.795 10:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:35.795 10:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.795 10:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.795 10:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.795 10:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.795 10:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.795 10:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.795 10:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.795 10:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.795 "name": "Existed_Raid", 00:15:35.795 "uuid": "c7db0a38-7d36-437b-83e9-bb7f283effb0", 00:15:35.795 "strip_size_kb": 64, 00:15:35.795 "state": "configuring", 00:15:35.795 "raid_level": "raid5f", 00:15:35.795 "superblock": true, 00:15:35.795 "num_base_bdevs": 3, 00:15:35.795 "num_base_bdevs_discovered": 1, 00:15:35.795 "num_base_bdevs_operational": 3, 00:15:35.795 "base_bdevs_list": [ 00:15:35.795 { 00:15:35.795 "name": "BaseBdev1", 00:15:35.795 "uuid": "79e8051f-2af4-485c-85d9-ba4706ac2ac4", 00:15:35.795 "is_configured": true, 00:15:35.795 "data_offset": 2048, 00:15:35.795 "data_size": 63488 00:15:35.795 }, 00:15:35.795 { 00:15:35.795 "name": "BaseBdev2", 00:15:35.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.795 "is_configured": false, 00:15:35.795 "data_offset": 0, 00:15:35.795 "data_size": 0 00:15:35.795 }, 00:15:35.795 { 00:15:35.795 "name": "BaseBdev3", 00:15:35.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.795 "is_configured": false, 00:15:35.795 "data_offset": 0, 00:15:35.795 "data_size": 
0 00:15:35.795 } 00:15:35.795 ] 00:15:35.795 }' 00:15:35.795 10:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.795 10:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.362 10:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:36.362 10:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.362 10:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.362 [2024-11-15 10:43:57.409439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:36.362 BaseBdev2 00:15:36.362 10:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.362 10:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:36.362 10:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:36.362 10:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:36.362 10:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:36.362 10:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:36.362 10:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:36.362 10:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:36.362 10:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.362 10:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.362 10:43:57 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.362 10:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:36.362 10:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.362 10:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.362 [ 00:15:36.362 { 00:15:36.362 "name": "BaseBdev2", 00:15:36.362 "aliases": [ 00:15:36.362 "915bd60d-9d4f-40c4-a9c4-1cdf9d11b481" 00:15:36.362 ], 00:15:36.362 "product_name": "Malloc disk", 00:15:36.362 "block_size": 512, 00:15:36.362 "num_blocks": 65536, 00:15:36.362 "uuid": "915bd60d-9d4f-40c4-a9c4-1cdf9d11b481", 00:15:36.362 "assigned_rate_limits": { 00:15:36.362 "rw_ios_per_sec": 0, 00:15:36.362 "rw_mbytes_per_sec": 0, 00:15:36.362 "r_mbytes_per_sec": 0, 00:15:36.362 "w_mbytes_per_sec": 0 00:15:36.362 }, 00:15:36.362 "claimed": true, 00:15:36.362 "claim_type": "exclusive_write", 00:15:36.362 "zoned": false, 00:15:36.362 "supported_io_types": { 00:15:36.362 "read": true, 00:15:36.362 "write": true, 00:15:36.362 "unmap": true, 00:15:36.362 "flush": true, 00:15:36.362 "reset": true, 00:15:36.362 "nvme_admin": false, 00:15:36.362 "nvme_io": false, 00:15:36.362 "nvme_io_md": false, 00:15:36.362 "write_zeroes": true, 00:15:36.362 "zcopy": true, 00:15:36.362 "get_zone_info": false, 00:15:36.362 "zone_management": false, 00:15:36.362 "zone_append": false, 00:15:36.362 "compare": false, 00:15:36.362 "compare_and_write": false, 00:15:36.362 "abort": true, 00:15:36.362 "seek_hole": false, 00:15:36.362 "seek_data": false, 00:15:36.362 "copy": true, 00:15:36.362 "nvme_iov_md": false 00:15:36.362 }, 00:15:36.362 "memory_domains": [ 00:15:36.362 { 00:15:36.362 "dma_device_id": "system", 00:15:36.362 "dma_device_type": 1 00:15:36.362 }, 00:15:36.362 { 00:15:36.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.362 "dma_device_type": 2 00:15:36.362 } 
00:15:36.362 ], 00:15:36.362 "driver_specific": {} 00:15:36.362 } 00:15:36.362 ] 00:15:36.362 10:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.362 10:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:36.362 10:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:36.362 10:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:36.362 10:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:36.362 10:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:36.362 10:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:36.362 10:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:36.362 10:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.363 10:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:36.363 10:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.363 10:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.363 10:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.363 10:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.363 10:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.363 10:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.363 10:43:57 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.363 10:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.363 10:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.363 10:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.363 "name": "Existed_Raid", 00:15:36.363 "uuid": "c7db0a38-7d36-437b-83e9-bb7f283effb0", 00:15:36.363 "strip_size_kb": 64, 00:15:36.363 "state": "configuring", 00:15:36.363 "raid_level": "raid5f", 00:15:36.363 "superblock": true, 00:15:36.363 "num_base_bdevs": 3, 00:15:36.363 "num_base_bdevs_discovered": 2, 00:15:36.363 "num_base_bdevs_operational": 3, 00:15:36.363 "base_bdevs_list": [ 00:15:36.363 { 00:15:36.363 "name": "BaseBdev1", 00:15:36.363 "uuid": "79e8051f-2af4-485c-85d9-ba4706ac2ac4", 00:15:36.363 "is_configured": true, 00:15:36.363 "data_offset": 2048, 00:15:36.363 "data_size": 63488 00:15:36.363 }, 00:15:36.363 { 00:15:36.363 "name": "BaseBdev2", 00:15:36.363 "uuid": "915bd60d-9d4f-40c4-a9c4-1cdf9d11b481", 00:15:36.363 "is_configured": true, 00:15:36.363 "data_offset": 2048, 00:15:36.363 "data_size": 63488 00:15:36.363 }, 00:15:36.363 { 00:15:36.363 "name": "BaseBdev3", 00:15:36.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.363 "is_configured": false, 00:15:36.363 "data_offset": 0, 00:15:36.363 "data_size": 0 00:15:36.363 } 00:15:36.363 ] 00:15:36.363 }' 00:15:36.363 10:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.363 10:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.929 10:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:36.929 10:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:36.929 10:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.929 [2024-11-15 10:43:58.008899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:36.929 [2024-11-15 10:43:58.009223] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:36.929 [2024-11-15 10:43:58.009256] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:36.929 BaseBdev3 00:15:36.929 [2024-11-15 10:43:58.009614] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:36.929 10:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.930 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:36.930 10:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:36.930 10:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:36.930 10:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:36.930 10:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:36.930 10:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:36.930 10:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:36.930 10:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.930 10:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.930 [2024-11-15 10:43:58.014878] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:36.930 [2024-11-15 10:43:58.014908] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:36.930 [2024-11-15 10:43:58.015243] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.930 10:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.930 10:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:36.930 10:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.930 10:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.930 [ 00:15:36.930 { 00:15:36.930 "name": "BaseBdev3", 00:15:36.930 "aliases": [ 00:15:36.930 "e1c5a90f-e73e-4145-ba61-9416b9be567d" 00:15:36.930 ], 00:15:36.930 "product_name": "Malloc disk", 00:15:36.930 "block_size": 512, 00:15:36.930 "num_blocks": 65536, 00:15:36.930 "uuid": "e1c5a90f-e73e-4145-ba61-9416b9be567d", 00:15:36.930 "assigned_rate_limits": { 00:15:36.930 "rw_ios_per_sec": 0, 00:15:36.930 "rw_mbytes_per_sec": 0, 00:15:36.930 "r_mbytes_per_sec": 0, 00:15:36.930 "w_mbytes_per_sec": 0 00:15:36.930 }, 00:15:36.930 "claimed": true, 00:15:36.930 "claim_type": "exclusive_write", 00:15:36.930 "zoned": false, 00:15:36.930 "supported_io_types": { 00:15:36.930 "read": true, 00:15:36.930 "write": true, 00:15:36.930 "unmap": true, 00:15:36.930 "flush": true, 00:15:36.930 "reset": true, 00:15:36.930 "nvme_admin": false, 00:15:36.930 "nvme_io": false, 00:15:36.930 "nvme_io_md": false, 00:15:36.930 "write_zeroes": true, 00:15:36.930 "zcopy": true, 00:15:36.930 "get_zone_info": false, 00:15:36.930 "zone_management": false, 00:15:36.930 "zone_append": false, 00:15:36.930 "compare": false, 00:15:36.930 "compare_and_write": false, 00:15:36.930 "abort": true, 00:15:36.930 "seek_hole": false, 00:15:36.930 "seek_data": false, 00:15:36.930 "copy": true, 00:15:36.930 
"nvme_iov_md": false 00:15:36.930 }, 00:15:36.930 "memory_domains": [ 00:15:36.930 { 00:15:36.930 "dma_device_id": "system", 00:15:36.930 "dma_device_type": 1 00:15:36.930 }, 00:15:36.930 { 00:15:36.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.930 "dma_device_type": 2 00:15:36.930 } 00:15:36.930 ], 00:15:36.930 "driver_specific": {} 00:15:36.930 } 00:15:36.930 ] 00:15:36.930 10:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.930 10:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:36.930 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:36.930 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:36.930 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:36.930 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:36.930 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.930 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:36.930 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.930 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:36.930 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.930 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.930 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.930 10:43:58 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.930 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.930 10:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.930 10:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.930 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.930 10:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.930 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.930 "name": "Existed_Raid", 00:15:36.930 "uuid": "c7db0a38-7d36-437b-83e9-bb7f283effb0", 00:15:36.930 "strip_size_kb": 64, 00:15:36.930 "state": "online", 00:15:36.930 "raid_level": "raid5f", 00:15:36.930 "superblock": true, 00:15:36.930 "num_base_bdevs": 3, 00:15:36.930 "num_base_bdevs_discovered": 3, 00:15:36.930 "num_base_bdevs_operational": 3, 00:15:36.930 "base_bdevs_list": [ 00:15:36.930 { 00:15:36.930 "name": "BaseBdev1", 00:15:36.930 "uuid": "79e8051f-2af4-485c-85d9-ba4706ac2ac4", 00:15:36.930 "is_configured": true, 00:15:36.930 "data_offset": 2048, 00:15:36.930 "data_size": 63488 00:15:36.930 }, 00:15:36.930 { 00:15:36.930 "name": "BaseBdev2", 00:15:36.930 "uuid": "915bd60d-9d4f-40c4-a9c4-1cdf9d11b481", 00:15:36.930 "is_configured": true, 00:15:36.930 "data_offset": 2048, 00:15:36.930 "data_size": 63488 00:15:36.930 }, 00:15:36.930 { 00:15:36.930 "name": "BaseBdev3", 00:15:36.930 "uuid": "e1c5a90f-e73e-4145-ba61-9416b9be567d", 00:15:36.930 "is_configured": true, 00:15:36.930 "data_offset": 2048, 00:15:36.930 "data_size": 63488 00:15:36.930 } 00:15:36.930 ] 00:15:36.930 }' 00:15:36.930 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.930 10:43:58 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.496 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:37.496 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:37.496 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:37.496 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:37.496 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:37.496 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:37.496 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:37.496 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:37.496 10:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.496 10:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.496 [2024-11-15 10:43:58.541234] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:37.496 10:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.496 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:37.496 "name": "Existed_Raid", 00:15:37.496 "aliases": [ 00:15:37.496 "c7db0a38-7d36-437b-83e9-bb7f283effb0" 00:15:37.496 ], 00:15:37.496 "product_name": "Raid Volume", 00:15:37.496 "block_size": 512, 00:15:37.496 "num_blocks": 126976, 00:15:37.496 "uuid": "c7db0a38-7d36-437b-83e9-bb7f283effb0", 00:15:37.496 "assigned_rate_limits": { 00:15:37.496 "rw_ios_per_sec": 0, 00:15:37.496 
"rw_mbytes_per_sec": 0, 00:15:37.496 "r_mbytes_per_sec": 0, 00:15:37.496 "w_mbytes_per_sec": 0 00:15:37.496 }, 00:15:37.496 "claimed": false, 00:15:37.496 "zoned": false, 00:15:37.496 "supported_io_types": { 00:15:37.496 "read": true, 00:15:37.496 "write": true, 00:15:37.496 "unmap": false, 00:15:37.496 "flush": false, 00:15:37.496 "reset": true, 00:15:37.496 "nvme_admin": false, 00:15:37.496 "nvme_io": false, 00:15:37.496 "nvme_io_md": false, 00:15:37.496 "write_zeroes": true, 00:15:37.496 "zcopy": false, 00:15:37.496 "get_zone_info": false, 00:15:37.496 "zone_management": false, 00:15:37.496 "zone_append": false, 00:15:37.496 "compare": false, 00:15:37.496 "compare_and_write": false, 00:15:37.496 "abort": false, 00:15:37.496 "seek_hole": false, 00:15:37.496 "seek_data": false, 00:15:37.496 "copy": false, 00:15:37.496 "nvme_iov_md": false 00:15:37.496 }, 00:15:37.496 "driver_specific": { 00:15:37.496 "raid": { 00:15:37.496 "uuid": "c7db0a38-7d36-437b-83e9-bb7f283effb0", 00:15:37.496 "strip_size_kb": 64, 00:15:37.496 "state": "online", 00:15:37.496 "raid_level": "raid5f", 00:15:37.496 "superblock": true, 00:15:37.496 "num_base_bdevs": 3, 00:15:37.496 "num_base_bdevs_discovered": 3, 00:15:37.496 "num_base_bdevs_operational": 3, 00:15:37.496 "base_bdevs_list": [ 00:15:37.496 { 00:15:37.496 "name": "BaseBdev1", 00:15:37.496 "uuid": "79e8051f-2af4-485c-85d9-ba4706ac2ac4", 00:15:37.496 "is_configured": true, 00:15:37.496 "data_offset": 2048, 00:15:37.496 "data_size": 63488 00:15:37.496 }, 00:15:37.496 { 00:15:37.496 "name": "BaseBdev2", 00:15:37.496 "uuid": "915bd60d-9d4f-40c4-a9c4-1cdf9d11b481", 00:15:37.496 "is_configured": true, 00:15:37.496 "data_offset": 2048, 00:15:37.497 "data_size": 63488 00:15:37.497 }, 00:15:37.497 { 00:15:37.497 "name": "BaseBdev3", 00:15:37.497 "uuid": "e1c5a90f-e73e-4145-ba61-9416b9be567d", 00:15:37.497 "is_configured": true, 00:15:37.497 "data_offset": 2048, 00:15:37.497 "data_size": 63488 00:15:37.497 } 00:15:37.497 ] 00:15:37.497 } 
00:15:37.497 } 00:15:37.497 }' 00:15:37.497 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:37.497 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:37.497 BaseBdev2 00:15:37.497 BaseBdev3' 00:15:37.497 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.755 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:37.755 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:37.755 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:37.755 10:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.755 10:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.755 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.755 10:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.755 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:37.755 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:37.755 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:37.755 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:37.755 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.755 10:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.755 10:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.755 10:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.755 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:37.755 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:37.755 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:37.755 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:37.755 10:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.755 10:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.755 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.755 10:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.755 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:37.755 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:37.755 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:37.755 10:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.755 10:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.755 [2024-11-15 
10:43:58.833087] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:38.014 10:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.014 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:38.014 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:38.014 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:38.014 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:38.014 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:38.014 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:38.014 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.014 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:38.014 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.014 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.014 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:38.014 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.014 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.014 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.014 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.014 10:43:58 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.014 10:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.014 10:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.014 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.014 10:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.014 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.014 "name": "Existed_Raid", 00:15:38.014 "uuid": "c7db0a38-7d36-437b-83e9-bb7f283effb0", 00:15:38.014 "strip_size_kb": 64, 00:15:38.014 "state": "online", 00:15:38.014 "raid_level": "raid5f", 00:15:38.014 "superblock": true, 00:15:38.014 "num_base_bdevs": 3, 00:15:38.014 "num_base_bdevs_discovered": 2, 00:15:38.014 "num_base_bdevs_operational": 2, 00:15:38.014 "base_bdevs_list": [ 00:15:38.014 { 00:15:38.014 "name": null, 00:15:38.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.014 "is_configured": false, 00:15:38.014 "data_offset": 0, 00:15:38.014 "data_size": 63488 00:15:38.014 }, 00:15:38.014 { 00:15:38.014 "name": "BaseBdev2", 00:15:38.014 "uuid": "915bd60d-9d4f-40c4-a9c4-1cdf9d11b481", 00:15:38.014 "is_configured": true, 00:15:38.014 "data_offset": 2048, 00:15:38.014 "data_size": 63488 00:15:38.014 }, 00:15:38.014 { 00:15:38.014 "name": "BaseBdev3", 00:15:38.014 "uuid": "e1c5a90f-e73e-4145-ba61-9416b9be567d", 00:15:38.014 "is_configured": true, 00:15:38.014 "data_offset": 2048, 00:15:38.014 "data_size": 63488 00:15:38.014 } 00:15:38.014 ] 00:15:38.014 }' 00:15:38.014 10:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.014 10:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
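The `verify_raid_bdev_state` calls traced above fetch `bdev_raid_get_bdevs all`, pick out the raid bdev with `jq -r '.[] | select(.name == "Existed_Raid")'`, and compare its fields against the expected state. A minimal Python sketch of that check, assuming only the JSON shape visible in the dumps in this log (the sample values mirror the dump above and are illustrative, not taken from a live RPC call):

```python
import json

# Sample payload in the shape of `rpc.py bdev_raid_get_bdevs all`,
# modeled on the dump printed in this log (values illustrative).
raid_bdevs_json = '''
[
  {
    "name": "Existed_Raid",
    "strip_size_kb": 64,
    "state": "online",
    "raid_level": "raid5f",
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 2,
    "num_base_bdevs_operational": 2
  }
]
'''

def verify_raid_bdev_state(raid_json, name, expected_state,
                           raid_level, strip_size, num_operational):
    # Equivalent of: jq -r '.[] | select(.name == "Existed_Raid")'
    info = next(b for b in json.loads(raid_json) if b["name"] == name)
    # The shell helper compares each field the same way, one [[ ... ]] at a time.
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    return info

info = verify_raid_bdev_state(raid_bdevs_json, "Existed_Raid",
                              "online", "raid5f", 64, 2)
print(info["num_base_bdevs_discovered"])  # 2
```

This mirrors the state transition being exercised here: after `bdev_malloc_delete BaseBdev1`, the raid5f volume stays `online` with 2 of 3 base bdevs, because raid5f tolerates a single missing member.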
00:15:38.273 10:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:38.273 10:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:38.273 10:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.273 10:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:38.273 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.273 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.273 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.549 10:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:38.549 10:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:38.549 10:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:38.549 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.549 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.549 [2024-11-15 10:43:59.452355] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:38.549 [2024-11-15 10:43:59.452563] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:38.549 [2024-11-15 10:43:59.537542] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:38.549 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.549 10:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:38.549 10:43:59 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:38.549 10:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.549 10:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:38.549 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.549 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.549 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.549 10:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:38.549 10:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:38.549 10:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:38.549 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.549 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.549 [2024-11-15 10:43:59.597630] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:38.549 [2024-11-15 10:43:59.597700] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:38.549 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.549 10:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:38.549 10:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:38.549 10:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:38.549 
10:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.549 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.549 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.549 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.808 10:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:38.808 10:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:38.808 10:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:38.808 10:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:38.808 10:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:38.808 10:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:38.808 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.808 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.808 BaseBdev2 00:15:38.808 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.808 10:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:38.808 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:38.808 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:38.808 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:38.808 10:43:59 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:38.808 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:38.808 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:38.808 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.808 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.808 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.808 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:38.808 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.808 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.808 [ 00:15:38.808 { 00:15:38.808 "name": "BaseBdev2", 00:15:38.808 "aliases": [ 00:15:38.808 "5fa8b6f7-928c-45c8-aef0-c14871f746c6" 00:15:38.808 ], 00:15:38.808 "product_name": "Malloc disk", 00:15:38.808 "block_size": 512, 00:15:38.808 "num_blocks": 65536, 00:15:38.808 "uuid": "5fa8b6f7-928c-45c8-aef0-c14871f746c6", 00:15:38.808 "assigned_rate_limits": { 00:15:38.808 "rw_ios_per_sec": 0, 00:15:38.808 "rw_mbytes_per_sec": 0, 00:15:38.808 "r_mbytes_per_sec": 0, 00:15:38.808 "w_mbytes_per_sec": 0 00:15:38.808 }, 00:15:38.808 "claimed": false, 00:15:38.808 "zoned": false, 00:15:38.808 "supported_io_types": { 00:15:38.808 "read": true, 00:15:38.808 "write": true, 00:15:38.808 "unmap": true, 00:15:38.808 "flush": true, 00:15:38.808 "reset": true, 00:15:38.808 "nvme_admin": false, 00:15:38.808 "nvme_io": false, 00:15:38.808 "nvme_io_md": false, 00:15:38.808 "write_zeroes": true, 00:15:38.808 "zcopy": true, 00:15:38.808 "get_zone_info": false, 
00:15:38.808 "zone_management": false, 00:15:38.808 "zone_append": false, 00:15:38.808 "compare": false, 00:15:38.808 "compare_and_write": false, 00:15:38.808 "abort": true, 00:15:38.808 "seek_hole": false, 00:15:38.808 "seek_data": false, 00:15:38.808 "copy": true, 00:15:38.808 "nvme_iov_md": false 00:15:38.808 }, 00:15:38.808 "memory_domains": [ 00:15:38.808 { 00:15:38.808 "dma_device_id": "system", 00:15:38.808 "dma_device_type": 1 00:15:38.808 }, 00:15:38.808 { 00:15:38.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.808 "dma_device_type": 2 00:15:38.808 } 00:15:38.808 ], 00:15:38.808 "driver_specific": {} 00:15:38.808 } 00:15:38.808 ] 00:15:38.808 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.808 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:38.808 10:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:38.808 10:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:38.808 10:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:38.808 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.808 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.808 BaseBdev3 00:15:38.809 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.809 10:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:38.809 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:38.809 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:38.809 10:43:59 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:38.809 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:38.809 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:38.809 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:38.809 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.809 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.809 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.809 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:38.809 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.809 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.809 [ 00:15:38.809 { 00:15:38.809 "name": "BaseBdev3", 00:15:38.809 "aliases": [ 00:15:38.809 "9be4660d-c1e1-4bfa-9f5a-17db86dc18e3" 00:15:38.809 ], 00:15:38.809 "product_name": "Malloc disk", 00:15:38.809 "block_size": 512, 00:15:38.809 "num_blocks": 65536, 00:15:38.809 "uuid": "9be4660d-c1e1-4bfa-9f5a-17db86dc18e3", 00:15:38.809 "assigned_rate_limits": { 00:15:38.809 "rw_ios_per_sec": 0, 00:15:38.809 "rw_mbytes_per_sec": 0, 00:15:38.809 "r_mbytes_per_sec": 0, 00:15:38.809 "w_mbytes_per_sec": 0 00:15:38.809 }, 00:15:38.809 "claimed": false, 00:15:38.809 "zoned": false, 00:15:38.809 "supported_io_types": { 00:15:38.809 "read": true, 00:15:38.809 "write": true, 00:15:38.809 "unmap": true, 00:15:38.809 "flush": true, 00:15:38.809 "reset": true, 00:15:38.809 "nvme_admin": false, 00:15:38.809 "nvme_io": false, 00:15:38.809 "nvme_io_md": 
false, 00:15:38.809 "write_zeroes": true, 00:15:38.809 "zcopy": true, 00:15:38.809 "get_zone_info": false, 00:15:38.809 "zone_management": false, 00:15:38.809 "zone_append": false, 00:15:38.809 "compare": false, 00:15:38.809 "compare_and_write": false, 00:15:38.809 "abort": true, 00:15:38.809 "seek_hole": false, 00:15:38.809 "seek_data": false, 00:15:38.809 "copy": true, 00:15:38.809 "nvme_iov_md": false 00:15:38.809 }, 00:15:38.809 "memory_domains": [ 00:15:38.809 { 00:15:38.809 "dma_device_id": "system", 00:15:38.809 "dma_device_type": 1 00:15:38.809 }, 00:15:38.809 { 00:15:38.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.809 "dma_device_type": 2 00:15:38.809 } 00:15:38.809 ], 00:15:38.809 "driver_specific": {} 00:15:38.809 } 00:15:38.809 ] 00:15:38.809 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.809 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:38.809 10:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:38.809 10:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:38.809 10:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:38.809 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.809 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.809 [2024-11-15 10:43:59.873939] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:38.809 [2024-11-15 10:43:59.873992] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:38.809 [2024-11-15 10:43:59.874026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:15:38.809 [2024-11-15 10:43:59.876436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:38.809 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.809 10:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:38.809 10:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.809 10:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:38.809 10:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.809 10:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.809 10:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:38.809 10:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.809 10:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.809 10:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.809 10:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.809 10:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.809 10:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.809 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.809 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.809 10:43:59 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.809 10:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.809 "name": "Existed_Raid", 00:15:38.809 "uuid": "ce1b4361-7281-43cf-a592-c780a4e198cb", 00:15:38.809 "strip_size_kb": 64, 00:15:38.809 "state": "configuring", 00:15:38.809 "raid_level": "raid5f", 00:15:38.809 "superblock": true, 00:15:38.809 "num_base_bdevs": 3, 00:15:38.809 "num_base_bdevs_discovered": 2, 00:15:38.809 "num_base_bdevs_operational": 3, 00:15:38.809 "base_bdevs_list": [ 00:15:38.809 { 00:15:38.809 "name": "BaseBdev1", 00:15:38.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.809 "is_configured": false, 00:15:38.809 "data_offset": 0, 00:15:38.809 "data_size": 0 00:15:38.809 }, 00:15:38.809 { 00:15:38.809 "name": "BaseBdev2", 00:15:38.809 "uuid": "5fa8b6f7-928c-45c8-aef0-c14871f746c6", 00:15:38.809 "is_configured": true, 00:15:38.809 "data_offset": 2048, 00:15:38.809 "data_size": 63488 00:15:38.809 }, 00:15:38.809 { 00:15:38.809 "name": "BaseBdev3", 00:15:38.809 "uuid": "9be4660d-c1e1-4bfa-9f5a-17db86dc18e3", 00:15:38.809 "is_configured": true, 00:15:38.809 "data_offset": 2048, 00:15:38.809 "data_size": 63488 00:15:38.809 } 00:15:38.809 ] 00:15:38.809 }' 00:15:38.809 10:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.809 10:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.377 10:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:39.377 10:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.377 10:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.377 [2024-11-15 10:44:00.414048] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:39.377 
10:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.377 10:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:39.377 10:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.377 10:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.377 10:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.377 10:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.377 10:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:39.377 10:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.377 10:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.377 10:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.377 10:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.377 10:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.377 10:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.377 10:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.377 10:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.377 10:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.377 10:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:39.377 "name": "Existed_Raid", 00:15:39.377 "uuid": "ce1b4361-7281-43cf-a592-c780a4e198cb", 00:15:39.377 "strip_size_kb": 64, 00:15:39.377 "state": "configuring", 00:15:39.377 "raid_level": "raid5f", 00:15:39.377 "superblock": true, 00:15:39.377 "num_base_bdevs": 3, 00:15:39.377 "num_base_bdevs_discovered": 1, 00:15:39.377 "num_base_bdevs_operational": 3, 00:15:39.377 "base_bdevs_list": [ 00:15:39.377 { 00:15:39.377 "name": "BaseBdev1", 00:15:39.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.377 "is_configured": false, 00:15:39.377 "data_offset": 0, 00:15:39.377 "data_size": 0 00:15:39.377 }, 00:15:39.377 { 00:15:39.377 "name": null, 00:15:39.377 "uuid": "5fa8b6f7-928c-45c8-aef0-c14871f746c6", 00:15:39.377 "is_configured": false, 00:15:39.377 "data_offset": 0, 00:15:39.377 "data_size": 63488 00:15:39.377 }, 00:15:39.377 { 00:15:39.377 "name": "BaseBdev3", 00:15:39.377 "uuid": "9be4660d-c1e1-4bfa-9f5a-17db86dc18e3", 00:15:39.377 "is_configured": true, 00:15:39.377 "data_offset": 2048, 00:15:39.377 "data_size": 63488 00:15:39.377 } 00:15:39.377 ] 00:15:39.377 }' 00:15:39.377 10:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.377 10:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.944 10:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.944 10:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:39.944 10:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.944 10:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.944 10:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.944 10:44:00 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:39.944 10:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:39.944 10:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.944 10:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.944 [2024-11-15 10:44:00.987550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:39.944 BaseBdev1 00:15:39.944 10:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.944 10:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:39.944 10:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:39.944 10:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:39.944 10:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:39.944 10:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:39.944 10:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:39.944 10:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:39.944 10:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.944 10:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.944 10:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.944 10:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:39.944 
10:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.944 10:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.944 [ 00:15:39.944 { 00:15:39.944 "name": "BaseBdev1", 00:15:39.944 "aliases": [ 00:15:39.944 "4579edf1-550c-474b-ac5e-ded7b2926ba3" 00:15:39.944 ], 00:15:39.944 "product_name": "Malloc disk", 00:15:39.944 "block_size": 512, 00:15:39.944 "num_blocks": 65536, 00:15:39.944 "uuid": "4579edf1-550c-474b-ac5e-ded7b2926ba3", 00:15:39.944 "assigned_rate_limits": { 00:15:39.944 "rw_ios_per_sec": 0, 00:15:39.944 "rw_mbytes_per_sec": 0, 00:15:39.944 "r_mbytes_per_sec": 0, 00:15:39.944 "w_mbytes_per_sec": 0 00:15:39.944 }, 00:15:39.944 "claimed": true, 00:15:39.944 "claim_type": "exclusive_write", 00:15:39.944 "zoned": false, 00:15:39.944 "supported_io_types": { 00:15:39.944 "read": true, 00:15:39.944 "write": true, 00:15:39.944 "unmap": true, 00:15:39.944 "flush": true, 00:15:39.944 "reset": true, 00:15:39.944 "nvme_admin": false, 00:15:39.944 "nvme_io": false, 00:15:39.944 "nvme_io_md": false, 00:15:39.944 "write_zeroes": true, 00:15:39.944 "zcopy": true, 00:15:39.944 "get_zone_info": false, 00:15:39.944 "zone_management": false, 00:15:39.944 "zone_append": false, 00:15:39.944 "compare": false, 00:15:39.944 "compare_and_write": false, 00:15:39.944 "abort": true, 00:15:39.944 "seek_hole": false, 00:15:39.944 "seek_data": false, 00:15:39.944 "copy": true, 00:15:39.944 "nvme_iov_md": false 00:15:39.944 }, 00:15:39.944 "memory_domains": [ 00:15:39.944 { 00:15:39.944 "dma_device_id": "system", 00:15:39.944 "dma_device_type": 1 00:15:39.944 }, 00:15:39.944 { 00:15:39.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.944 "dma_device_type": 2 00:15:39.944 } 00:15:39.944 ], 00:15:39.944 "driver_specific": {} 00:15:39.944 } 00:15:39.944 ] 00:15:39.944 10:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.944 
10:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:39.944 10:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:39.944 10:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.944 10:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.944 10:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.944 10:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.944 10:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:39.944 10:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.944 10:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.944 10:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.944 10:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.944 10:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.944 10:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.944 10:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.944 10:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.944 10:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.944 10:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:39.944 "name": "Existed_Raid", 00:15:39.944 "uuid": "ce1b4361-7281-43cf-a592-c780a4e198cb", 00:15:39.944 "strip_size_kb": 64, 00:15:39.944 "state": "configuring", 00:15:39.945 "raid_level": "raid5f", 00:15:39.945 "superblock": true, 00:15:39.945 "num_base_bdevs": 3, 00:15:39.945 "num_base_bdevs_discovered": 2, 00:15:39.945 "num_base_bdevs_operational": 3, 00:15:39.945 "base_bdevs_list": [ 00:15:39.945 { 00:15:39.945 "name": "BaseBdev1", 00:15:39.945 "uuid": "4579edf1-550c-474b-ac5e-ded7b2926ba3", 00:15:39.945 "is_configured": true, 00:15:39.945 "data_offset": 2048, 00:15:39.945 "data_size": 63488 00:15:39.945 }, 00:15:39.945 { 00:15:39.945 "name": null, 00:15:39.945 "uuid": "5fa8b6f7-928c-45c8-aef0-c14871f746c6", 00:15:39.945 "is_configured": false, 00:15:39.945 "data_offset": 0, 00:15:39.945 "data_size": 63488 00:15:39.945 }, 00:15:39.945 { 00:15:39.945 "name": "BaseBdev3", 00:15:39.945 "uuid": "9be4660d-c1e1-4bfa-9f5a-17db86dc18e3", 00:15:39.945 "is_configured": true, 00:15:39.945 "data_offset": 2048, 00:15:39.945 "data_size": 63488 00:15:39.945 } 00:15:39.945 ] 00:15:39.945 }' 00:15:39.945 10:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.945 10:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.510 10:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.510 10:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:40.510 10:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.510 10:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.510 10:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.510 10:44:01 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:40.510 10:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:40.510 10:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.510 10:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.510 [2024-11-15 10:44:01.535784] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:40.510 10:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.510 10:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:40.510 10:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:40.510 10:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:40.510 10:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.510 10:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.510 10:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:40.510 10:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.510 10:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.510 10:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.510 10:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.510 10:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.510 10:44:01 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.510 10:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.510 10:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.510 10:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.510 10:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.510 "name": "Existed_Raid", 00:15:40.510 "uuid": "ce1b4361-7281-43cf-a592-c780a4e198cb", 00:15:40.510 "strip_size_kb": 64, 00:15:40.510 "state": "configuring", 00:15:40.510 "raid_level": "raid5f", 00:15:40.510 "superblock": true, 00:15:40.510 "num_base_bdevs": 3, 00:15:40.510 "num_base_bdevs_discovered": 1, 00:15:40.510 "num_base_bdevs_operational": 3, 00:15:40.510 "base_bdevs_list": [ 00:15:40.510 { 00:15:40.510 "name": "BaseBdev1", 00:15:40.510 "uuid": "4579edf1-550c-474b-ac5e-ded7b2926ba3", 00:15:40.510 "is_configured": true, 00:15:40.510 "data_offset": 2048, 00:15:40.510 "data_size": 63488 00:15:40.510 }, 00:15:40.510 { 00:15:40.510 "name": null, 00:15:40.510 "uuid": "5fa8b6f7-928c-45c8-aef0-c14871f746c6", 00:15:40.510 "is_configured": false, 00:15:40.510 "data_offset": 0, 00:15:40.510 "data_size": 63488 00:15:40.510 }, 00:15:40.510 { 00:15:40.510 "name": null, 00:15:40.510 "uuid": "9be4660d-c1e1-4bfa-9f5a-17db86dc18e3", 00:15:40.510 "is_configured": false, 00:15:40.510 "data_offset": 0, 00:15:40.510 "data_size": 63488 00:15:40.510 } 00:15:40.510 ] 00:15:40.510 }' 00:15:40.510 10:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.510 10:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.078 10:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:15:41.078 10:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.078 10:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.078 10:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.078 10:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.078 10:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:41.078 10:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:41.078 10:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.078 10:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.078 [2024-11-15 10:44:02.103985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:41.078 10:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.078 10:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:41.078 10:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:41.078 10:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:41.078 10:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.078 10:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.078 10:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:41.078 
10:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.078 10:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.078 10:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.078 10:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.078 10:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.078 10:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.078 10:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.078 10:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.078 10:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.078 10:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.078 "name": "Existed_Raid", 00:15:41.078 "uuid": "ce1b4361-7281-43cf-a592-c780a4e198cb", 00:15:41.078 "strip_size_kb": 64, 00:15:41.078 "state": "configuring", 00:15:41.078 "raid_level": "raid5f", 00:15:41.078 "superblock": true, 00:15:41.078 "num_base_bdevs": 3, 00:15:41.078 "num_base_bdevs_discovered": 2, 00:15:41.078 "num_base_bdevs_operational": 3, 00:15:41.078 "base_bdevs_list": [ 00:15:41.078 { 00:15:41.078 "name": "BaseBdev1", 00:15:41.078 "uuid": "4579edf1-550c-474b-ac5e-ded7b2926ba3", 00:15:41.078 "is_configured": true, 00:15:41.078 "data_offset": 2048, 00:15:41.078 "data_size": 63488 00:15:41.078 }, 00:15:41.078 { 00:15:41.078 "name": null, 00:15:41.078 "uuid": "5fa8b6f7-928c-45c8-aef0-c14871f746c6", 00:15:41.078 "is_configured": false, 00:15:41.078 "data_offset": 0, 00:15:41.078 "data_size": 63488 00:15:41.078 }, 
00:15:41.078 { 00:15:41.078 "name": "BaseBdev3", 00:15:41.078 "uuid": "9be4660d-c1e1-4bfa-9f5a-17db86dc18e3", 00:15:41.078 "is_configured": true, 00:15:41.078 "data_offset": 2048, 00:15:41.078 "data_size": 63488 00:15:41.078 } 00:15:41.078 ] 00:15:41.078 }' 00:15:41.078 10:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.078 10:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.646 10:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.646 10:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:41.646 10:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.646 10:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.646 10:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.646 10:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:41.646 10:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:41.646 10:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.646 10:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.646 [2024-11-15 10:44:02.668272] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:41.646 10:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.646 10:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:41.646 10:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:15:41.646 10:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:41.646 10:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.646 10:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.646 10:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:41.646 10:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.646 10:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.646 10:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.646 10:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.646 10:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.646 10:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.646 10:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.646 10:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.646 10:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.904 10:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.904 "name": "Existed_Raid", 00:15:41.904 "uuid": "ce1b4361-7281-43cf-a592-c780a4e198cb", 00:15:41.904 "strip_size_kb": 64, 00:15:41.904 "state": "configuring", 00:15:41.904 "raid_level": "raid5f", 00:15:41.904 "superblock": true, 00:15:41.904 "num_base_bdevs": 3, 00:15:41.904 "num_base_bdevs_discovered": 1, 00:15:41.904 
"num_base_bdevs_operational": 3, 00:15:41.904 "base_bdevs_list": [ 00:15:41.904 { 00:15:41.904 "name": null, 00:15:41.904 "uuid": "4579edf1-550c-474b-ac5e-ded7b2926ba3", 00:15:41.904 "is_configured": false, 00:15:41.904 "data_offset": 0, 00:15:41.904 "data_size": 63488 00:15:41.904 }, 00:15:41.904 { 00:15:41.904 "name": null, 00:15:41.904 "uuid": "5fa8b6f7-928c-45c8-aef0-c14871f746c6", 00:15:41.904 "is_configured": false, 00:15:41.904 "data_offset": 0, 00:15:41.904 "data_size": 63488 00:15:41.904 }, 00:15:41.904 { 00:15:41.904 "name": "BaseBdev3", 00:15:41.904 "uuid": "9be4660d-c1e1-4bfa-9f5a-17db86dc18e3", 00:15:41.904 "is_configured": true, 00:15:41.904 "data_offset": 2048, 00:15:41.904 "data_size": 63488 00:15:41.904 } 00:15:41.904 ] 00:15:41.904 }' 00:15:41.904 10:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.904 10:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.163 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.163 10:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.163 10:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.163 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:42.163 10:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.163 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:42.163 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:42.163 10:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.163 10:44:03 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.163 [2024-11-15 10:44:03.308425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:42.163 10:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.163 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:42.163 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:42.163 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:42.163 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:42.163 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.163 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:42.163 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.163 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.163 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.163 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.163 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.163 10:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.163 10:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.163 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:15:42.421 10:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.421 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.421 "name": "Existed_Raid", 00:15:42.421 "uuid": "ce1b4361-7281-43cf-a592-c780a4e198cb", 00:15:42.421 "strip_size_kb": 64, 00:15:42.421 "state": "configuring", 00:15:42.421 "raid_level": "raid5f", 00:15:42.421 "superblock": true, 00:15:42.421 "num_base_bdevs": 3, 00:15:42.421 "num_base_bdevs_discovered": 2, 00:15:42.421 "num_base_bdevs_operational": 3, 00:15:42.421 "base_bdevs_list": [ 00:15:42.421 { 00:15:42.421 "name": null, 00:15:42.421 "uuid": "4579edf1-550c-474b-ac5e-ded7b2926ba3", 00:15:42.421 "is_configured": false, 00:15:42.421 "data_offset": 0, 00:15:42.422 "data_size": 63488 00:15:42.422 }, 00:15:42.422 { 00:15:42.422 "name": "BaseBdev2", 00:15:42.422 "uuid": "5fa8b6f7-928c-45c8-aef0-c14871f746c6", 00:15:42.422 "is_configured": true, 00:15:42.422 "data_offset": 2048, 00:15:42.422 "data_size": 63488 00:15:42.422 }, 00:15:42.422 { 00:15:42.422 "name": "BaseBdev3", 00:15:42.422 "uuid": "9be4660d-c1e1-4bfa-9f5a-17db86dc18e3", 00:15:42.422 "is_configured": true, 00:15:42.422 "data_offset": 2048, 00:15:42.422 "data_size": 63488 00:15:42.422 } 00:15:42.422 ] 00:15:42.422 }' 00:15:42.422 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.422 10:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.057 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.057 10:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.057 10:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.057 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:15:43.057 10:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.057 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:43.057 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.057 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:43.057 10:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.057 10:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.057 10:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.058 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4579edf1-550c-474b-ac5e-ded7b2926ba3 00:15:43.058 10:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.058 10:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.058 [2024-11-15 10:44:03.978412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:43.058 NewBaseBdev 00:15:43.058 [2024-11-15 10:44:03.978913] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:43.058 [2024-11-15 10:44:03.978945] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:43.058 [2024-11-15 10:44:03.979255] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:43.058 10:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.058 10:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # 
waitforbdev NewBaseBdev 00:15:43.058 10:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:43.058 10:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:43.058 10:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:43.058 10:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:43.058 10:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:43.058 10:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:43.058 10:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.058 10:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.058 [2024-11-15 10:44:03.984343] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:43.058 [2024-11-15 10:44:03.984483] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:43.058 [2024-11-15 10:44:03.984974] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:43.058 10:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.058 10:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:43.058 10:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.058 10:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.058 [ 00:15:43.058 { 00:15:43.058 "name": "NewBaseBdev", 00:15:43.058 "aliases": [ 00:15:43.058 "4579edf1-550c-474b-ac5e-ded7b2926ba3" 00:15:43.058 
], 00:15:43.058 "product_name": "Malloc disk", 00:15:43.058 "block_size": 512, 00:15:43.058 "num_blocks": 65536, 00:15:43.058 "uuid": "4579edf1-550c-474b-ac5e-ded7b2926ba3", 00:15:43.058 "assigned_rate_limits": { 00:15:43.058 "rw_ios_per_sec": 0, 00:15:43.058 "rw_mbytes_per_sec": 0, 00:15:43.058 "r_mbytes_per_sec": 0, 00:15:43.058 "w_mbytes_per_sec": 0 00:15:43.058 }, 00:15:43.058 "claimed": true, 00:15:43.058 "claim_type": "exclusive_write", 00:15:43.058 "zoned": false, 00:15:43.058 "supported_io_types": { 00:15:43.058 "read": true, 00:15:43.058 "write": true, 00:15:43.058 "unmap": true, 00:15:43.058 "flush": true, 00:15:43.058 "reset": true, 00:15:43.058 "nvme_admin": false, 00:15:43.058 "nvme_io": false, 00:15:43.058 "nvme_io_md": false, 00:15:43.058 "write_zeroes": true, 00:15:43.058 "zcopy": true, 00:15:43.058 "get_zone_info": false, 00:15:43.058 "zone_management": false, 00:15:43.058 "zone_append": false, 00:15:43.058 "compare": false, 00:15:43.058 "compare_and_write": false, 00:15:43.058 "abort": true, 00:15:43.058 "seek_hole": false, 00:15:43.058 "seek_data": false, 00:15:43.058 "copy": true, 00:15:43.058 "nvme_iov_md": false 00:15:43.058 }, 00:15:43.058 "memory_domains": [ 00:15:43.058 { 00:15:43.058 "dma_device_id": "system", 00:15:43.058 "dma_device_type": 1 00:15:43.058 }, 00:15:43.058 { 00:15:43.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.058 "dma_device_type": 2 00:15:43.058 } 00:15:43.058 ], 00:15:43.058 "driver_specific": {} 00:15:43.058 } 00:15:43.058 ] 00:15:43.058 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.058 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:43.058 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:43.058 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:15:43.058 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:43.058 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.058 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.058 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:43.058 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.058 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.058 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.058 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.058 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.058 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.058 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.058 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.058 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.058 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.058 "name": "Existed_Raid", 00:15:43.058 "uuid": "ce1b4361-7281-43cf-a592-c780a4e198cb", 00:15:43.058 "strip_size_kb": 64, 00:15:43.058 "state": "online", 00:15:43.058 "raid_level": "raid5f", 00:15:43.058 "superblock": true, 00:15:43.058 "num_base_bdevs": 3, 00:15:43.058 "num_base_bdevs_discovered": 3, 00:15:43.058 
"num_base_bdevs_operational": 3, 00:15:43.058 "base_bdevs_list": [ 00:15:43.058 { 00:15:43.058 "name": "NewBaseBdev", 00:15:43.058 "uuid": "4579edf1-550c-474b-ac5e-ded7b2926ba3", 00:15:43.058 "is_configured": true, 00:15:43.058 "data_offset": 2048, 00:15:43.058 "data_size": 63488 00:15:43.058 }, 00:15:43.058 { 00:15:43.058 "name": "BaseBdev2", 00:15:43.058 "uuid": "5fa8b6f7-928c-45c8-aef0-c14871f746c6", 00:15:43.058 "is_configured": true, 00:15:43.058 "data_offset": 2048, 00:15:43.058 "data_size": 63488 00:15:43.058 }, 00:15:43.058 { 00:15:43.058 "name": "BaseBdev3", 00:15:43.058 "uuid": "9be4660d-c1e1-4bfa-9f5a-17db86dc18e3", 00:15:43.058 "is_configured": true, 00:15:43.058 "data_offset": 2048, 00:15:43.058 "data_size": 63488 00:15:43.058 } 00:15:43.058 ] 00:15:43.058 }' 00:15:43.058 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.058 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.651 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:43.651 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:43.651 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:43.651 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:43.651 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:43.651 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:43.651 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:43.651 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:43.652 10:44:04 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.652 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.652 [2024-11-15 10:44:04.515055] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:43.652 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.652 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:43.652 "name": "Existed_Raid", 00:15:43.652 "aliases": [ 00:15:43.652 "ce1b4361-7281-43cf-a592-c780a4e198cb" 00:15:43.652 ], 00:15:43.652 "product_name": "Raid Volume", 00:15:43.652 "block_size": 512, 00:15:43.652 "num_blocks": 126976, 00:15:43.652 "uuid": "ce1b4361-7281-43cf-a592-c780a4e198cb", 00:15:43.652 "assigned_rate_limits": { 00:15:43.652 "rw_ios_per_sec": 0, 00:15:43.652 "rw_mbytes_per_sec": 0, 00:15:43.652 "r_mbytes_per_sec": 0, 00:15:43.652 "w_mbytes_per_sec": 0 00:15:43.652 }, 00:15:43.652 "claimed": false, 00:15:43.652 "zoned": false, 00:15:43.652 "supported_io_types": { 00:15:43.652 "read": true, 00:15:43.652 "write": true, 00:15:43.652 "unmap": false, 00:15:43.652 "flush": false, 00:15:43.652 "reset": true, 00:15:43.652 "nvme_admin": false, 00:15:43.652 "nvme_io": false, 00:15:43.652 "nvme_io_md": false, 00:15:43.652 "write_zeroes": true, 00:15:43.652 "zcopy": false, 00:15:43.652 "get_zone_info": false, 00:15:43.652 "zone_management": false, 00:15:43.652 "zone_append": false, 00:15:43.652 "compare": false, 00:15:43.652 "compare_and_write": false, 00:15:43.652 "abort": false, 00:15:43.652 "seek_hole": false, 00:15:43.652 "seek_data": false, 00:15:43.653 "copy": false, 00:15:43.653 "nvme_iov_md": false 00:15:43.653 }, 00:15:43.653 "driver_specific": { 00:15:43.653 "raid": { 00:15:43.653 "uuid": "ce1b4361-7281-43cf-a592-c780a4e198cb", 00:15:43.653 "strip_size_kb": 64, 00:15:43.653 "state": "online", 00:15:43.653 
"raid_level": "raid5f", 00:15:43.653 "superblock": true, 00:15:43.653 "num_base_bdevs": 3, 00:15:43.653 "num_base_bdevs_discovered": 3, 00:15:43.653 "num_base_bdevs_operational": 3, 00:15:43.653 "base_bdevs_list": [ 00:15:43.653 { 00:15:43.653 "name": "NewBaseBdev", 00:15:43.653 "uuid": "4579edf1-550c-474b-ac5e-ded7b2926ba3", 00:15:43.653 "is_configured": true, 00:15:43.653 "data_offset": 2048, 00:15:43.653 "data_size": 63488 00:15:43.653 }, 00:15:43.653 { 00:15:43.653 "name": "BaseBdev2", 00:15:43.653 "uuid": "5fa8b6f7-928c-45c8-aef0-c14871f746c6", 00:15:43.653 "is_configured": true, 00:15:43.653 "data_offset": 2048, 00:15:43.653 "data_size": 63488 00:15:43.653 }, 00:15:43.653 { 00:15:43.653 "name": "BaseBdev3", 00:15:43.653 "uuid": "9be4660d-c1e1-4bfa-9f5a-17db86dc18e3", 00:15:43.653 "is_configured": true, 00:15:43.653 "data_offset": 2048, 00:15:43.653 "data_size": 63488 00:15:43.653 } 00:15:43.653 ] 00:15:43.653 } 00:15:43.653 } 00:15:43.653 }' 00:15:43.653 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:43.653 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:43.653 BaseBdev2 00:15:43.653 BaseBdev3' 00:15:43.653 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:43.653 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:43.653 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:43.653 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:43.653 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b 
NewBaseBdev 00:15:43.653 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.653 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.653 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.653 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:43.653 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:43.653 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:43.653 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:43.654 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.654 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.654 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:43.654 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.654 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:43.654 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:43.654 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:43.654 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:43.654 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.654 10:44:04 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:43.654 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:43.654 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.917 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:43.917 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:43.917 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:43.917 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.917 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.917 [2024-11-15 10:44:04.830898] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:43.917 [2024-11-15 10:44:04.831049] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:43.917 [2024-11-15 10:44:04.831252] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:43.917 [2024-11-15 10:44:04.831643] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:43.917 [2024-11-15 10:44:04.831668] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:43.917 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.917 10:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80823 00:15:43.917 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80823 ']' 00:15:43.917 10:44:04 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 80823 00:15:43.917 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:43.917 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:43.917 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80823 00:15:43.917 killing process with pid 80823 00:15:43.917 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:43.917 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:43.917 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80823' 00:15:43.917 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80823 00:15:43.917 [2024-11-15 10:44:04.869647] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:43.917 10:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80823 00:15:44.175 [2024-11-15 10:44:05.140783] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:45.110 10:44:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:45.110 00:15:45.110 real 0m11.525s 00:15:45.110 user 0m19.108s 00:15:45.110 sys 0m1.664s 00:15:45.110 10:44:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:45.110 ************************************ 00:15:45.110 END TEST raid5f_state_function_test_sb 00:15:45.110 ************************************ 00:15:45.110 10:44:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.110 10:44:06 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:15:45.110 10:44:06 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:45.110 10:44:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:45.110 10:44:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:45.110 ************************************ 00:15:45.110 START TEST raid5f_superblock_test 00:15:45.110 ************************************ 00:15:45.110 10:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:15:45.110 10:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:45.110 10:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:45.110 10:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:45.110 10:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:45.110 10:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:45.110 10:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:45.110 10:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:45.110 10:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:45.110 10:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:45.110 10:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:45.110 10:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:45.110 10:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:45.110 10:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:45.110 10:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:15:45.110 10:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:45.110 10:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:45.110 10:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81445 00:15:45.110 10:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:45.110 10:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81445 00:15:45.110 10:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81445 ']' 00:15:45.110 10:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.110 10:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:45.110 10:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:45.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:45.110 10:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:45.110 10:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.369 [2024-11-15 10:44:06.301502] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:15:45.369 [2024-11-15 10:44:06.301862] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81445 ] 00:15:45.369 [2024-11-15 10:44:06.476617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.627 [2024-11-15 10:44:06.605207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.886 [2024-11-15 10:44:06.808970] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:45.886 [2024-11-15 10:44:06.809027] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:46.452 10:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:46.452 10:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:46.452 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:46.452 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:46.452 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:46.452 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:46.452 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:46.452 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:46.452 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:46.452 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:46.452 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:15:46.452 10:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.452 10:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.452 malloc1 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.453 [2024-11-15 10:44:07.385697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:46.453 [2024-11-15 10:44:07.385907] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:46.453 [2024-11-15 10:44:07.386056] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:46.453 [2024-11-15 10:44:07.386176] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:46.453 [2024-11-15 10:44:07.388934] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:46.453 [2024-11-15 10:44:07.389097] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:46.453 pt1 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.453 malloc2 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.453 [2024-11-15 10:44:07.442811] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:46.453 [2024-11-15 10:44:07.442911] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:46.453 [2024-11-15 10:44:07.442944] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:46.453 [2024-11-15 10:44:07.442959] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:46.453 [2024-11-15 10:44:07.445780] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:46.453 [2024-11-15 10:44:07.445970] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:46.453 pt2 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.453 malloc3 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.453 [2024-11-15 10:44:07.510081] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:46.453 [2024-11-15 10:44:07.510269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:46.453 [2024-11-15 10:44:07.510344] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:46.453 [2024-11-15 10:44:07.510469] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:46.453 [2024-11-15 10:44:07.513313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:46.453 [2024-11-15 10:44:07.513463] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:46.453 pt3 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.453 [2024-11-15 10:44:07.522191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:46.453 [2024-11-15 10:44:07.524820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:46.453 [2024-11-15 10:44:07.525028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:46.453 [2024-11-15 10:44:07.525271] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:46.453 [2024-11-15 10:44:07.525301] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:15:46.453 [2024-11-15 10:44:07.525635] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:46.453 [2024-11-15 10:44:07.530809] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:46.453 [2024-11-15 10:44:07.530943] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:46.453 [2024-11-15 10:44:07.531199] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.453 
10:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.453 "name": "raid_bdev1", 00:15:46.453 "uuid": "933980a6-9747-41ba-b1ab-84d3fe5e101b", 00:15:46.453 "strip_size_kb": 64, 00:15:46.453 "state": "online", 00:15:46.453 "raid_level": "raid5f", 00:15:46.453 "superblock": true, 00:15:46.453 "num_base_bdevs": 3, 00:15:46.453 "num_base_bdevs_discovered": 3, 00:15:46.453 "num_base_bdevs_operational": 3, 00:15:46.453 "base_bdevs_list": [ 00:15:46.453 { 00:15:46.453 "name": "pt1", 00:15:46.453 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:46.453 "is_configured": true, 00:15:46.453 "data_offset": 2048, 00:15:46.453 "data_size": 63488 00:15:46.453 }, 00:15:46.453 { 00:15:46.453 "name": "pt2", 00:15:46.453 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:46.453 "is_configured": true, 00:15:46.453 "data_offset": 2048, 00:15:46.453 "data_size": 63488 00:15:46.453 }, 00:15:46.453 { 00:15:46.453 "name": "pt3", 00:15:46.453 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:46.453 "is_configured": true, 00:15:46.453 "data_offset": 2048, 00:15:46.453 "data_size": 63488 00:15:46.453 } 00:15:46.453 ] 00:15:46.453 }' 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.453 10:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.020 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:47.020 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:47.020 10:44:08 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:47.020 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:47.020 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:47.020 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:47.020 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:47.020 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.020 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.020 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:47.020 [2024-11-15 10:44:08.045210] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:47.020 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.020 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:47.020 "name": "raid_bdev1", 00:15:47.020 "aliases": [ 00:15:47.020 "933980a6-9747-41ba-b1ab-84d3fe5e101b" 00:15:47.020 ], 00:15:47.020 "product_name": "Raid Volume", 00:15:47.020 "block_size": 512, 00:15:47.020 "num_blocks": 126976, 00:15:47.020 "uuid": "933980a6-9747-41ba-b1ab-84d3fe5e101b", 00:15:47.020 "assigned_rate_limits": { 00:15:47.020 "rw_ios_per_sec": 0, 00:15:47.020 "rw_mbytes_per_sec": 0, 00:15:47.020 "r_mbytes_per_sec": 0, 00:15:47.020 "w_mbytes_per_sec": 0 00:15:47.020 }, 00:15:47.020 "claimed": false, 00:15:47.020 "zoned": false, 00:15:47.020 "supported_io_types": { 00:15:47.020 "read": true, 00:15:47.020 "write": true, 00:15:47.020 "unmap": false, 00:15:47.020 "flush": false, 00:15:47.020 "reset": true, 00:15:47.020 "nvme_admin": false, 00:15:47.020 "nvme_io": false, 00:15:47.020 "nvme_io_md": false, 
00:15:47.020 "write_zeroes": true, 00:15:47.020 "zcopy": false, 00:15:47.020 "get_zone_info": false, 00:15:47.020 "zone_management": false, 00:15:47.020 "zone_append": false, 00:15:47.020 "compare": false, 00:15:47.020 "compare_and_write": false, 00:15:47.020 "abort": false, 00:15:47.020 "seek_hole": false, 00:15:47.020 "seek_data": false, 00:15:47.020 "copy": false, 00:15:47.020 "nvme_iov_md": false 00:15:47.020 }, 00:15:47.020 "driver_specific": { 00:15:47.020 "raid": { 00:15:47.020 "uuid": "933980a6-9747-41ba-b1ab-84d3fe5e101b", 00:15:47.020 "strip_size_kb": 64, 00:15:47.020 "state": "online", 00:15:47.020 "raid_level": "raid5f", 00:15:47.020 "superblock": true, 00:15:47.020 "num_base_bdevs": 3, 00:15:47.020 "num_base_bdevs_discovered": 3, 00:15:47.020 "num_base_bdevs_operational": 3, 00:15:47.020 "base_bdevs_list": [ 00:15:47.020 { 00:15:47.020 "name": "pt1", 00:15:47.020 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:47.020 "is_configured": true, 00:15:47.020 "data_offset": 2048, 00:15:47.020 "data_size": 63488 00:15:47.020 }, 00:15:47.020 { 00:15:47.020 "name": "pt2", 00:15:47.020 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:47.020 "is_configured": true, 00:15:47.020 "data_offset": 2048, 00:15:47.020 "data_size": 63488 00:15:47.020 }, 00:15:47.020 { 00:15:47.020 "name": "pt3", 00:15:47.020 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:47.020 "is_configured": true, 00:15:47.020 "data_offset": 2048, 00:15:47.020 "data_size": 63488 00:15:47.020 } 00:15:47.020 ] 00:15:47.020 } 00:15:47.020 } 00:15:47.020 }' 00:15:47.020 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:47.020 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:47.020 pt2 00:15:47.020 pt3' 00:15:47.020 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:15:47.279 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:47.279 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:47.279 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:47.279 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.279 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.279 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.279 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.279 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:47.279 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:47.279 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:47.279 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:47.279 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.279 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.279 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.279 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.279 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:47.279 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:47.279 
10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:47.279 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:47.279 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.279 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.279 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.279 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.279 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:47.279 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:47.279 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:47.279 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.279 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.279 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:47.279 [2024-11-15 10:44:08.357228] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:47.279 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.279 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=933980a6-9747-41ba-b1ab-84d3fe5e101b 00:15:47.279 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 933980a6-9747-41ba-b1ab-84d3fe5e101b ']' 00:15:47.279 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:47.279 10:44:08 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.279 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.279 [2024-11-15 10:44:08.404984] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:47.279 [2024-11-15 10:44:08.405141] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:47.279 [2024-11-15 10:44:08.405334] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:47.279 [2024-11-15 10:44:08.405564] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:47.279 [2024-11-15 10:44:08.405592] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:47.279 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.279 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.279 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.279 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:47.279 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.279 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.604 [2024-11-15 10:44:08.545079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:47.604 [2024-11-15 10:44:08.547669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:47.604 [2024-11-15 10:44:08.547743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:47.604 [2024-11-15 10:44:08.547816] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:47.604 [2024-11-15 10:44:08.547886] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:47.604 [2024-11-15 10:44:08.547920] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:47.604 [2024-11-15 10:44:08.547946] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:47.604 [2024-11-15 10:44:08.547959] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:47.604 request: 00:15:47.604 { 00:15:47.604 "name": "raid_bdev1", 00:15:47.604 "raid_level": "raid5f", 00:15:47.604 "base_bdevs": [ 00:15:47.604 "malloc1", 00:15:47.604 "malloc2", 00:15:47.604 "malloc3" 00:15:47.604 ], 00:15:47.604 "strip_size_kb": 64, 00:15:47.604 "superblock": false, 00:15:47.604 "method": "bdev_raid_create", 00:15:47.604 "req_id": 1 00:15:47.604 } 00:15:47.604 Got JSON-RPC error response 00:15:47.604 response: 00:15:47.604 { 00:15:47.604 "code": -17, 00:15:47.604 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:47.604 } 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.604 
10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.604 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.604 [2024-11-15 10:44:08.613045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:47.604 [2024-11-15 10:44:08.613229] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:47.605 [2024-11-15 10:44:08.613306] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:47.605 [2024-11-15 10:44:08.613423] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:47.605 [2024-11-15 10:44:08.616454] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:47.605 [2024-11-15 10:44:08.616644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:47.605 [2024-11-15 10:44:08.616888] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:47.605 [2024-11-15 10:44:08.617051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:47.605 pt1 00:15:47.605 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.605 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:15:47.605 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:47.605 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:47.605 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:47.605 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.605 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:47.605 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.605 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.605 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.605 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.605 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.605 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.605 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.605 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.605 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.605 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.605 "name": "raid_bdev1", 00:15:47.605 "uuid": "933980a6-9747-41ba-b1ab-84d3fe5e101b", 00:15:47.605 "strip_size_kb": 64, 00:15:47.605 "state": "configuring", 00:15:47.605 "raid_level": "raid5f", 00:15:47.605 "superblock": true, 00:15:47.605 "num_base_bdevs": 3, 00:15:47.605 "num_base_bdevs_discovered": 1, 00:15:47.605 
"num_base_bdevs_operational": 3, 00:15:47.605 "base_bdevs_list": [ 00:15:47.605 { 00:15:47.605 "name": "pt1", 00:15:47.605 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:47.605 "is_configured": true, 00:15:47.605 "data_offset": 2048, 00:15:47.605 "data_size": 63488 00:15:47.605 }, 00:15:47.605 { 00:15:47.605 "name": null, 00:15:47.605 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:47.605 "is_configured": false, 00:15:47.605 "data_offset": 2048, 00:15:47.605 "data_size": 63488 00:15:47.605 }, 00:15:47.605 { 00:15:47.605 "name": null, 00:15:47.605 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:47.605 "is_configured": false, 00:15:47.605 "data_offset": 2048, 00:15:47.605 "data_size": 63488 00:15:47.605 } 00:15:47.605 ] 00:15:47.605 }' 00:15:47.605 10:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.605 10:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.171 10:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:15:48.171 10:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:48.171 10:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.171 10:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.171 [2024-11-15 10:44:09.145602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:48.171 [2024-11-15 10:44:09.145827] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.172 [2024-11-15 10:44:09.145981] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:48.172 [2024-11-15 10:44:09.146008] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.172 [2024-11-15 10:44:09.146623] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.172 [2024-11-15 10:44:09.146670] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:48.172 [2024-11-15 10:44:09.146780] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:48.172 [2024-11-15 10:44:09.146813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:48.172 pt2 00:15:48.172 10:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.172 10:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:48.172 10:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.172 10:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.172 [2024-11-15 10:44:09.153609] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:48.172 10:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.172 10:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:48.172 10:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.172 10:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:48.172 10:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:48.172 10:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.172 10:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:48.172 10:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.172 10:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:48.172 10:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.172 10:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.172 10:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.172 10:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.172 10:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.172 10:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.172 10:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.172 10:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.172 "name": "raid_bdev1", 00:15:48.172 "uuid": "933980a6-9747-41ba-b1ab-84d3fe5e101b", 00:15:48.172 "strip_size_kb": 64, 00:15:48.172 "state": "configuring", 00:15:48.172 "raid_level": "raid5f", 00:15:48.172 "superblock": true, 00:15:48.172 "num_base_bdevs": 3, 00:15:48.172 "num_base_bdevs_discovered": 1, 00:15:48.172 "num_base_bdevs_operational": 3, 00:15:48.172 "base_bdevs_list": [ 00:15:48.172 { 00:15:48.172 "name": "pt1", 00:15:48.172 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:48.172 "is_configured": true, 00:15:48.172 "data_offset": 2048, 00:15:48.172 "data_size": 63488 00:15:48.172 }, 00:15:48.172 { 00:15:48.172 "name": null, 00:15:48.172 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:48.172 "is_configured": false, 00:15:48.172 "data_offset": 0, 00:15:48.172 "data_size": 63488 00:15:48.172 }, 00:15:48.172 { 00:15:48.172 "name": null, 00:15:48.172 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:48.172 "is_configured": false, 00:15:48.172 "data_offset": 2048, 00:15:48.172 "data_size": 63488 00:15:48.172 } 00:15:48.172 ] 00:15:48.172 }' 00:15:48.172 10:44:09 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.172 10:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.739 10:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:48.739 10:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:48.739 10:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:48.739 10:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.739 10:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.739 [2024-11-15 10:44:09.661726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:48.739 [2024-11-15 10:44:09.661946] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.739 [2024-11-15 10:44:09.662117] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:48.739 [2024-11-15 10:44:09.662266] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.739 [2024-11-15 10:44:09.662981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.739 [2024-11-15 10:44:09.663029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:48.739 [2024-11-15 10:44:09.663132] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:48.739 [2024-11-15 10:44:09.663170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:48.739 pt2 00:15:48.739 10:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.739 10:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:48.739 10:44:09 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:48.739 10:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:48.739 10:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.739 10:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.739 [2024-11-15 10:44:09.673700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:48.739 [2024-11-15 10:44:09.673756] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.739 [2024-11-15 10:44:09.673779] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:48.739 [2024-11-15 10:44:09.673795] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.739 [2024-11-15 10:44:09.674238] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.739 [2024-11-15 10:44:09.674275] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:48.739 [2024-11-15 10:44:09.674348] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:48.739 [2024-11-15 10:44:09.674378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:48.739 [2024-11-15 10:44:09.674572] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:48.739 [2024-11-15 10:44:09.674595] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:48.739 [2024-11-15 10:44:09.674914] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:48.739 pt3 00:15:48.739 [2024-11-15 10:44:09.679969] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:48.739 [2024-11-15 10:44:09.679994] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:48.739 [2024-11-15 10:44:09.680225] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:48.739 10:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.739 10:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:48.739 10:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:48.739 10:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:48.739 10:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.739 10:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.739 10:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:48.739 10:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.739 10:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:48.739 10:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.739 10:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.739 10:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.739 10:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.739 10:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.739 10:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.739 10:44:09 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.739 10:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.739 10:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.739 10:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.739 "name": "raid_bdev1", 00:15:48.739 "uuid": "933980a6-9747-41ba-b1ab-84d3fe5e101b", 00:15:48.739 "strip_size_kb": 64, 00:15:48.739 "state": "online", 00:15:48.739 "raid_level": "raid5f", 00:15:48.739 "superblock": true, 00:15:48.739 "num_base_bdevs": 3, 00:15:48.739 "num_base_bdevs_discovered": 3, 00:15:48.739 "num_base_bdevs_operational": 3, 00:15:48.739 "base_bdevs_list": [ 00:15:48.739 { 00:15:48.739 "name": "pt1", 00:15:48.739 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:48.739 "is_configured": true, 00:15:48.739 "data_offset": 2048, 00:15:48.739 "data_size": 63488 00:15:48.739 }, 00:15:48.739 { 00:15:48.739 "name": "pt2", 00:15:48.739 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:48.739 "is_configured": true, 00:15:48.739 "data_offset": 2048, 00:15:48.739 "data_size": 63488 00:15:48.739 }, 00:15:48.739 { 00:15:48.739 "name": "pt3", 00:15:48.739 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:48.739 "is_configured": true, 00:15:48.739 "data_offset": 2048, 00:15:48.739 "data_size": 63488 00:15:48.739 } 00:15:48.739 ] 00:15:48.739 }' 00:15:48.739 10:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.739 10:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.305 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:49.305 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:49.305 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:15:49.305 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:49.305 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:49.305 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:49.305 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:49.305 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:49.305 10:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.305 10:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.305 [2024-11-15 10:44:10.198312] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:49.305 10:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.305 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:49.305 "name": "raid_bdev1", 00:15:49.305 "aliases": [ 00:15:49.305 "933980a6-9747-41ba-b1ab-84d3fe5e101b" 00:15:49.305 ], 00:15:49.305 "product_name": "Raid Volume", 00:15:49.305 "block_size": 512, 00:15:49.305 "num_blocks": 126976, 00:15:49.305 "uuid": "933980a6-9747-41ba-b1ab-84d3fe5e101b", 00:15:49.305 "assigned_rate_limits": { 00:15:49.305 "rw_ios_per_sec": 0, 00:15:49.305 "rw_mbytes_per_sec": 0, 00:15:49.305 "r_mbytes_per_sec": 0, 00:15:49.305 "w_mbytes_per_sec": 0 00:15:49.305 }, 00:15:49.305 "claimed": false, 00:15:49.305 "zoned": false, 00:15:49.305 "supported_io_types": { 00:15:49.305 "read": true, 00:15:49.305 "write": true, 00:15:49.305 "unmap": false, 00:15:49.305 "flush": false, 00:15:49.305 "reset": true, 00:15:49.305 "nvme_admin": false, 00:15:49.305 "nvme_io": false, 00:15:49.305 "nvme_io_md": false, 00:15:49.305 "write_zeroes": true, 00:15:49.305 "zcopy": false, 00:15:49.305 
"get_zone_info": false, 00:15:49.305 "zone_management": false, 00:15:49.305 "zone_append": false, 00:15:49.305 "compare": false, 00:15:49.305 "compare_and_write": false, 00:15:49.305 "abort": false, 00:15:49.305 "seek_hole": false, 00:15:49.305 "seek_data": false, 00:15:49.305 "copy": false, 00:15:49.305 "nvme_iov_md": false 00:15:49.305 }, 00:15:49.305 "driver_specific": { 00:15:49.305 "raid": { 00:15:49.305 "uuid": "933980a6-9747-41ba-b1ab-84d3fe5e101b", 00:15:49.305 "strip_size_kb": 64, 00:15:49.305 "state": "online", 00:15:49.305 "raid_level": "raid5f", 00:15:49.305 "superblock": true, 00:15:49.305 "num_base_bdevs": 3, 00:15:49.305 "num_base_bdevs_discovered": 3, 00:15:49.305 "num_base_bdevs_operational": 3, 00:15:49.305 "base_bdevs_list": [ 00:15:49.305 { 00:15:49.305 "name": "pt1", 00:15:49.305 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:49.305 "is_configured": true, 00:15:49.305 "data_offset": 2048, 00:15:49.305 "data_size": 63488 00:15:49.305 }, 00:15:49.305 { 00:15:49.305 "name": "pt2", 00:15:49.305 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:49.305 "is_configured": true, 00:15:49.305 "data_offset": 2048, 00:15:49.305 "data_size": 63488 00:15:49.305 }, 00:15:49.305 { 00:15:49.305 "name": "pt3", 00:15:49.305 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:49.305 "is_configured": true, 00:15:49.305 "data_offset": 2048, 00:15:49.305 "data_size": 63488 00:15:49.305 } 00:15:49.305 ] 00:15:49.305 } 00:15:49.305 } 00:15:49.305 }' 00:15:49.305 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:49.305 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:49.305 pt2 00:15:49.305 pt3' 00:15:49.305 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:49.305 10:44:10 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:49.305 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:49.305 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:49.305 10:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.305 10:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.305 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:49.305 10:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.305 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:49.305 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:49.305 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:49.305 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:49.305 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:49.305 10:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.305 10:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.305 10:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.563 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:49.563 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:49.563 10:44:10 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:49.563 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:49.563 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:49.563 10:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.563 10:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.563 10:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.563 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:49.563 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:49.563 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:49.563 10:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.563 10:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.563 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:49.563 [2024-11-15 10:44:10.534295] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:49.563 10:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.563 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 933980a6-9747-41ba-b1ab-84d3fe5e101b '!=' 933980a6-9747-41ba-b1ab-84d3fe5e101b ']' 00:15:49.563 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:49.563 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:49.563 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:15:49.563 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:49.563 10:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.563 10:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.563 [2024-11-15 10:44:10.582192] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:49.563 10:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.563 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:49.563 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.563 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:49.563 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.563 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.563 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:49.563 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.563 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.563 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.563 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.563 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.563 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.563 10:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:49.563 10:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.563 10:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.563 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.563 "name": "raid_bdev1", 00:15:49.563 "uuid": "933980a6-9747-41ba-b1ab-84d3fe5e101b", 00:15:49.563 "strip_size_kb": 64, 00:15:49.563 "state": "online", 00:15:49.563 "raid_level": "raid5f", 00:15:49.563 "superblock": true, 00:15:49.563 "num_base_bdevs": 3, 00:15:49.563 "num_base_bdevs_discovered": 2, 00:15:49.563 "num_base_bdevs_operational": 2, 00:15:49.563 "base_bdevs_list": [ 00:15:49.563 { 00:15:49.563 "name": null, 00:15:49.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.563 "is_configured": false, 00:15:49.564 "data_offset": 0, 00:15:49.564 "data_size": 63488 00:15:49.564 }, 00:15:49.564 { 00:15:49.564 "name": "pt2", 00:15:49.564 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:49.564 "is_configured": true, 00:15:49.564 "data_offset": 2048, 00:15:49.564 "data_size": 63488 00:15:49.564 }, 00:15:49.564 { 00:15:49.564 "name": "pt3", 00:15:49.564 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:49.564 "is_configured": true, 00:15:49.564 "data_offset": 2048, 00:15:49.564 "data_size": 63488 00:15:49.564 } 00:15:49.564 ] 00:15:49.564 }' 00:15:49.564 10:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.564 10:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.131 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:50.131 10:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.131 10:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.131 [2024-11-15 10:44:11.082266] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:50.132 [2024-11-15 10:44:11.082454] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:50.132 [2024-11-15 10:44:11.082588] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:50.132 [2024-11-15 10:44:11.082670] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:50.132 [2024-11-15 10:44:11.082693] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.132 [2024-11-15 10:44:11.158248] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:50.132 [2024-11-15 10:44:11.158432] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.132 [2024-11-15 10:44:11.158466] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:50.132 [2024-11-15 10:44:11.158485] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:15:50.132 [2024-11-15 10:44:11.161789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.132 [2024-11-15 10:44:11.161854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:50.132 [2024-11-15 10:44:11.162000] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:50.132 [2024-11-15 10:44:11.162089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:50.132 pt2 00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.132 "name": "raid_bdev1", 00:15:50.132 "uuid": "933980a6-9747-41ba-b1ab-84d3fe5e101b", 00:15:50.132 "strip_size_kb": 64, 00:15:50.132 "state": "configuring", 00:15:50.132 "raid_level": "raid5f", 00:15:50.132 "superblock": true, 00:15:50.132 "num_base_bdevs": 3, 00:15:50.132 "num_base_bdevs_discovered": 1, 00:15:50.132 "num_base_bdevs_operational": 2, 00:15:50.132 "base_bdevs_list": [ 00:15:50.132 { 00:15:50.132 "name": null, 00:15:50.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.132 "is_configured": false, 00:15:50.132 "data_offset": 2048, 00:15:50.132 "data_size": 63488 00:15:50.132 }, 00:15:50.132 { 00:15:50.132 "name": "pt2", 00:15:50.132 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:50.132 "is_configured": true, 00:15:50.132 "data_offset": 2048, 00:15:50.132 "data_size": 63488 00:15:50.132 }, 00:15:50.132 { 00:15:50.132 "name": null, 00:15:50.132 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:50.132 "is_configured": false, 00:15:50.132 "data_offset": 2048, 00:15:50.132 "data_size": 63488 00:15:50.132 } 00:15:50.132 ] 00:15:50.132 }' 00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.132 10:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.698 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:50.698 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:50.698 10:44:11 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:15:50.698 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:50.698 10:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.698 10:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.698 [2024-11-15 10:44:11.666502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:50.698 [2024-11-15 10:44:11.666714] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.698 [2024-11-15 10:44:11.666793] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:50.698 [2024-11-15 10:44:11.666920] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.698 [2024-11-15 10:44:11.667568] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.698 [2024-11-15 10:44:11.667732] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:50.698 [2024-11-15 10:44:11.667852] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:50.698 [2024-11-15 10:44:11.667901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:50.698 [2024-11-15 10:44:11.668048] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:50.698 [2024-11-15 10:44:11.668069] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:50.698 [2024-11-15 10:44:11.668403] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:50.698 [2024-11-15 10:44:11.673398] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:50.698 [2024-11-15 10:44:11.673424] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:15:50.698 pt3 00:15:50.698 [2024-11-15 10:44:11.673813] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:50.698 10:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.699 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:50.699 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.699 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:50.699 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.699 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.699 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:50.699 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.699 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.699 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.699 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.699 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.699 10:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.699 10:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.699 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.699 10:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.699 10:44:11 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.699 "name": "raid_bdev1", 00:15:50.699 "uuid": "933980a6-9747-41ba-b1ab-84d3fe5e101b", 00:15:50.699 "strip_size_kb": 64, 00:15:50.699 "state": "online", 00:15:50.699 "raid_level": "raid5f", 00:15:50.699 "superblock": true, 00:15:50.699 "num_base_bdevs": 3, 00:15:50.699 "num_base_bdevs_discovered": 2, 00:15:50.699 "num_base_bdevs_operational": 2, 00:15:50.699 "base_bdevs_list": [ 00:15:50.699 { 00:15:50.699 "name": null, 00:15:50.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.699 "is_configured": false, 00:15:50.699 "data_offset": 2048, 00:15:50.699 "data_size": 63488 00:15:50.699 }, 00:15:50.699 { 00:15:50.699 "name": "pt2", 00:15:50.699 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:50.699 "is_configured": true, 00:15:50.699 "data_offset": 2048, 00:15:50.699 "data_size": 63488 00:15:50.699 }, 00:15:50.699 { 00:15:50.699 "name": "pt3", 00:15:50.699 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:50.699 "is_configured": true, 00:15:50.699 "data_offset": 2048, 00:15:50.699 "data_size": 63488 00:15:50.699 } 00:15:50.699 ] 00:15:50.699 }' 00:15:50.699 10:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.699 10:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.265 10:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:51.265 10:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.265 10:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.265 [2024-11-15 10:44:12.179417] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:51.265 [2024-11-15 10:44:12.179597] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:51.265 [2024-11-15 10:44:12.179710] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:51.265 [2024-11-15 10:44:12.179795] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:51.265 [2024-11-15 10:44:12.179811] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:51.265 10:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.265 10:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.265 10:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.265 10:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:51.265 10:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.265 10:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.265 10:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:51.265 10:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:51.266 10:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:15:51.266 10:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:15:51.266 10:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:15:51.266 10:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.266 10:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.266 10:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.266 10:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:15:51.266 10:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.266 10:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.266 [2024-11-15 10:44:12.251441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:51.266 [2024-11-15 10:44:12.251529] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.266 [2024-11-15 10:44:12.251562] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:51.266 [2024-11-15 10:44:12.251577] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.266 [2024-11-15 10:44:12.254331] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.266 [2024-11-15 10:44:12.254376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:51.266 [2024-11-15 10:44:12.254472] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:51.266 [2024-11-15 10:44:12.254675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:51.266 [2024-11-15 10:44:12.254893] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:51.266 [2024-11-15 10:44:12.255044] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:51.266 [2024-11-15 10:44:12.255080] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:51.266 [2024-11-15 10:44:12.255156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:51.266 pt1 00:15:51.266 10:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.266 10:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:15:51.266 10:44:12 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:51.266 10:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.266 10:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:51.266 10:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.266 10:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.266 10:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:51.266 10:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.266 10:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.266 10:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.266 10:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.266 10:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.266 10:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.266 10:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.266 10:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.266 10:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.266 10:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.266 "name": "raid_bdev1", 00:15:51.266 "uuid": "933980a6-9747-41ba-b1ab-84d3fe5e101b", 00:15:51.266 "strip_size_kb": 64, 00:15:51.266 "state": "configuring", 00:15:51.266 "raid_level": "raid5f", 00:15:51.266 
"superblock": true, 00:15:51.266 "num_base_bdevs": 3, 00:15:51.266 "num_base_bdevs_discovered": 1, 00:15:51.266 "num_base_bdevs_operational": 2, 00:15:51.266 "base_bdevs_list": [ 00:15:51.266 { 00:15:51.266 "name": null, 00:15:51.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.266 "is_configured": false, 00:15:51.266 "data_offset": 2048, 00:15:51.266 "data_size": 63488 00:15:51.266 }, 00:15:51.266 { 00:15:51.266 "name": "pt2", 00:15:51.266 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:51.266 "is_configured": true, 00:15:51.266 "data_offset": 2048, 00:15:51.266 "data_size": 63488 00:15:51.266 }, 00:15:51.266 { 00:15:51.266 "name": null, 00:15:51.266 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:51.266 "is_configured": false, 00:15:51.266 "data_offset": 2048, 00:15:51.266 "data_size": 63488 00:15:51.266 } 00:15:51.266 ] 00:15:51.266 }' 00:15:51.266 10:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.266 10:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.833 10:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:51.833 10:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:51.833 10:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.834 10:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.834 10:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.834 10:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:51.834 10:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:51.834 10:44:12 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.834 10:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.834 [2024-11-15 10:44:12.839703] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:51.834 [2024-11-15 10:44:12.839791] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.834 [2024-11-15 10:44:12.839825] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:51.834 [2024-11-15 10:44:12.839840] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.834 [2024-11-15 10:44:12.840422] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.834 [2024-11-15 10:44:12.840466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:51.834 [2024-11-15 10:44:12.840600] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:51.834 [2024-11-15 10:44:12.840634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:51.834 [2024-11-15 10:44:12.840801] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:51.834 [2024-11-15 10:44:12.840816] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:51.834 [2024-11-15 10:44:12.841118] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:51.834 [2024-11-15 10:44:12.846196] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:51.834 [2024-11-15 10:44:12.846353] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:51.834 [2024-11-15 10:44:12.846794] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.834 pt3 00:15:51.834 10:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:51.834 10:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:51.834 10:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.834 10:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.834 10:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.834 10:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.834 10:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:51.834 10:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.834 10:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.834 10:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.834 10:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.834 10:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.834 10:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.834 10:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.834 10:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.834 10:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.834 10:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.834 "name": "raid_bdev1", 00:15:51.834 "uuid": "933980a6-9747-41ba-b1ab-84d3fe5e101b", 00:15:51.834 "strip_size_kb": 64, 00:15:51.834 "state": "online", 00:15:51.834 "raid_level": 
"raid5f", 00:15:51.834 "superblock": true, 00:15:51.834 "num_base_bdevs": 3, 00:15:51.834 "num_base_bdevs_discovered": 2, 00:15:51.834 "num_base_bdevs_operational": 2, 00:15:51.834 "base_bdevs_list": [ 00:15:51.834 { 00:15:51.834 "name": null, 00:15:51.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.834 "is_configured": false, 00:15:51.834 "data_offset": 2048, 00:15:51.834 "data_size": 63488 00:15:51.834 }, 00:15:51.834 { 00:15:51.834 "name": "pt2", 00:15:51.834 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:51.834 "is_configured": true, 00:15:51.834 "data_offset": 2048, 00:15:51.834 "data_size": 63488 00:15:51.834 }, 00:15:51.834 { 00:15:51.834 "name": "pt3", 00:15:51.834 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:51.834 "is_configured": true, 00:15:51.834 "data_offset": 2048, 00:15:51.834 "data_size": 63488 00:15:51.834 } 00:15:51.834 ] 00:15:51.834 }' 00:15:51.834 10:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.834 10:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.429 10:44:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:52.429 10:44:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:52.429 10:44:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.429 10:44:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.429 10:44:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.429 10:44:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:52.429 10:44:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:52.429 10:44:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 
00:15:52.429 10:44:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.429 10:44:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.429 [2024-11-15 10:44:13.453374] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:52.429 10:44:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.429 10:44:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 933980a6-9747-41ba-b1ab-84d3fe5e101b '!=' 933980a6-9747-41ba-b1ab-84d3fe5e101b ']' 00:15:52.429 10:44:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81445 00:15:52.429 10:44:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81445 ']' 00:15:52.429 10:44:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81445 00:15:52.429 10:44:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:52.429 10:44:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:52.429 10:44:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81445 00:15:52.429 killing process with pid 81445 00:15:52.429 10:44:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:52.429 10:44:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:52.429 10:44:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81445' 00:15:52.429 10:44:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81445 00:15:52.429 [2024-11-15 10:44:13.529078] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:52.429 10:44:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81445 
00:15:52.429 [2024-11-15 10:44:13.529197] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:52.429 [2024-11-15 10:44:13.529278] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:52.429 [2024-11-15 10:44:13.529298] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:52.688 [2024-11-15 10:44:13.816324] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:54.065 10:44:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:54.065 00:15:54.065 real 0m8.628s 00:15:54.065 user 0m14.180s 00:15:54.065 sys 0m1.125s 00:15:54.065 10:44:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:54.065 ************************************ 00:15:54.065 10:44:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.065 END TEST raid5f_superblock_test 00:15:54.065 ************************************ 00:15:54.065 10:44:14 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:54.065 10:44:14 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:15:54.065 10:44:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:54.065 10:44:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:54.065 10:44:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:54.065 ************************************ 00:15:54.065 START TEST raid5f_rebuild_test 00:15:54.065 ************************************ 00:15:54.065 10:44:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:15:54.065 10:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:54.065 10:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:15:54.065 10:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:54.065 10:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:54.065 10:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:54.065 10:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:54.065 10:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:54.065 10:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:54.065 10:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:54.065 10:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:54.065 10:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:54.065 10:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:54.065 10:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:54.065 10:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:54.065 10:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:54.065 10:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:54.065 10:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:54.065 10:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:54.065 10:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:54.065 10:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:54.065 10:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:54.065 10:44:14 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:54.065 10:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:54.065 10:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:54.065 10:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:54.065 10:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:54.065 10:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:54.065 10:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:54.065 10:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81902 00:15:54.065 10:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:54.065 10:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81902 00:15:54.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.065 10:44:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81902 ']' 00:15:54.065 10:44:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.065 10:44:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:54.065 10:44:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.065 10:44:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:54.065 10:44:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.065 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:15:54.065 Zero copy mechanism will not be used. 00:15:54.065 [2024-11-15 10:44:14.999734] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:15:54.065 [2024-11-15 10:44:14.999894] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81902 ] 00:15:54.065 [2024-11-15 10:44:15.172747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.323 [2024-11-15 10:44:15.303387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.581 [2024-11-15 10:44:15.504425] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:54.581 [2024-11-15 10:44:15.504481] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:55.148 10:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:55.148 10:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:55.148 10:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:55.148 10:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:55.148 10:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.148 10:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.148 BaseBdev1_malloc 00:15:55.148 10:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.148 10:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:55.148 10:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.148 10:44:16 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.148 [2024-11-15 10:44:16.115975] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:55.148 [2024-11-15 10:44:16.116195] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.148 [2024-11-15 10:44:16.116269] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:55.148 [2024-11-15 10:44:16.116484] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.148 [2024-11-15 10:44:16.119387] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.148 [2024-11-15 10:44:16.119439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:55.148 BaseBdev1 00:15:55.148 10:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.148 10:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:55.148 10:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:55.148 10:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.148 10:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.148 BaseBdev2_malloc 00:15:55.148 10:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.148 10:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:55.148 10:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.148 10:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.148 [2024-11-15 10:44:16.163699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:15:55.148 [2024-11-15 10:44:16.163916] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.148 [2024-11-15 10:44:16.164080] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:55.148 [2024-11-15 10:44:16.164115] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.148 [2024-11-15 10:44:16.166836] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.148 [2024-11-15 10:44:16.166884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:55.148 BaseBdev2 00:15:55.148 10:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.148 10:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:55.148 10:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:55.148 10:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.148 10:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.148 BaseBdev3_malloc 00:15:55.148 10:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.148 10:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:55.148 10:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.148 10:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.148 [2024-11-15 10:44:16.221049] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:55.148 [2024-11-15 10:44:16.221256] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.148 [2024-11-15 10:44:16.221329] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000008a80 00:15:55.148 [2024-11-15 10:44:16.221451] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.148 [2024-11-15 10:44:16.224149] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.148 [2024-11-15 10:44:16.224200] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:55.148 BaseBdev3 00:15:55.148 10:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.148 10:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:55.148 10:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.148 10:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.148 spare_malloc 00:15:55.148 10:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.148 10:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:55.148 10:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.148 10:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.148 spare_delay 00:15:55.148 10:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.148 10:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:55.148 10:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.148 10:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.148 [2024-11-15 10:44:16.276566] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:55.148 [2024-11-15 10:44:16.276764] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.148 [2024-11-15 10:44:16.276802] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:55.148 [2024-11-15 10:44:16.276821] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.148 [2024-11-15 10:44:16.279577] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.148 [2024-11-15 10:44:16.279628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:55.148 spare 00:15:55.149 10:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.149 10:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:55.149 10:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.149 10:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.149 [2024-11-15 10:44:16.284649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:55.149 [2024-11-15 10:44:16.287116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:55.149 [2024-11-15 10:44:16.287204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:55.149 [2024-11-15 10:44:16.287321] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:55.149 [2024-11-15 10:44:16.287338] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:55.149 [2024-11-15 10:44:16.287712] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:55.149 [2024-11-15 10:44:16.292848] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:55.149 [2024-11-15 10:44:16.292880] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:55.149 [2024-11-15 10:44:16.293111] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.149 10:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.149 10:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:55.149 10:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.149 10:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.149 10:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:55.149 10:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.149 10:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:55.149 10:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.149 10:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.149 10:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.149 10:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.149 10:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.149 10:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.149 10:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.149 10:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.407 10:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.407 10:44:16 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.407 "name": "raid_bdev1", 00:15:55.407 "uuid": "eead8000-d10a-4ec8-9eb0-c65966f127cd", 00:15:55.407 "strip_size_kb": 64, 00:15:55.407 "state": "online", 00:15:55.407 "raid_level": "raid5f", 00:15:55.407 "superblock": false, 00:15:55.407 "num_base_bdevs": 3, 00:15:55.407 "num_base_bdevs_discovered": 3, 00:15:55.407 "num_base_bdevs_operational": 3, 00:15:55.407 "base_bdevs_list": [ 00:15:55.407 { 00:15:55.407 "name": "BaseBdev1", 00:15:55.407 "uuid": "72eac8c6-9f7a-5e5a-9cee-675627feebfa", 00:15:55.407 "is_configured": true, 00:15:55.407 "data_offset": 0, 00:15:55.407 "data_size": 65536 00:15:55.407 }, 00:15:55.407 { 00:15:55.407 "name": "BaseBdev2", 00:15:55.407 "uuid": "da8aadbf-7eca-5f3f-b853-25c7e22d339b", 00:15:55.407 "is_configured": true, 00:15:55.407 "data_offset": 0, 00:15:55.407 "data_size": 65536 00:15:55.407 }, 00:15:55.407 { 00:15:55.407 "name": "BaseBdev3", 00:15:55.407 "uuid": "79f6a09d-d115-5721-8cce-ddab1f6fab54", 00:15:55.407 "is_configured": true, 00:15:55.407 "data_offset": 0, 00:15:55.407 "data_size": 65536 00:15:55.407 } 00:15:55.407 ] 00:15:55.407 }' 00:15:55.407 10:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.407 10:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.665 10:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:55.665 10:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:55.665 10:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.665 10:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.665 [2024-11-15 10:44:16.787241] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:55.665 10:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:55.924 10:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:15:55.924 10:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:55.924 10:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.924 10:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.924 10:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.924 10:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.924 10:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:55.924 10:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:55.924 10:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:55.924 10:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:55.924 10:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:55.924 10:44:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:55.924 10:44:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:55.924 10:44:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:55.924 10:44:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:55.924 10:44:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:55.924 10:44:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:55.924 10:44:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:55.924 10:44:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:15:55.924 10:44:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:56.183 [2024-11-15 10:44:17.195167] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:56.183 /dev/nbd0 00:15:56.183 10:44:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:56.183 10:44:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:56.183 10:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:56.183 10:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:56.183 10:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:56.183 10:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:56.183 10:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:56.183 10:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:56.183 10:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:56.183 10:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:56.183 10:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:56.183 1+0 records in 00:15:56.183 1+0 records out 00:15:56.183 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325378 s, 12.6 MB/s 00:15:56.183 10:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:56.183 10:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:56.183 10:44:17 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:56.183 10:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:56.183 10:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:56.183 10:44:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:56.183 10:44:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:56.183 10:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:56.183 10:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:56.183 10:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:56.183 10:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:15:56.751 512+0 records in 00:15:56.751 512+0 records out 00:15:56.751 67108864 bytes (67 MB, 64 MiB) copied, 0.435034 s, 154 MB/s 00:15:56.751 10:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:56.751 10:44:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:56.751 10:44:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:56.751 10:44:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:56.751 10:44:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:56.751 10:44:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:56.751 10:44:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:57.010 [2024-11-15 10:44:17.995291] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:15:57.010 10:44:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:57.010 10:44:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:57.010 10:44:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:57.010 10:44:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:57.010 10:44:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:57.010 10:44:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:57.010 10:44:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:57.010 10:44:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:57.010 10:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:57.010 10:44:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.010 10:44:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.010 [2024-11-15 10:44:18.033136] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:57.010 10:44:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.010 10:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:57.010 10:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.010 10:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.010 10:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.010 10:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.010 10:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:15:57.010 10:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.010 10:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.010 10:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.010 10:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.010 10:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.010 10:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.010 10:44:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.010 10:44:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.010 10:44:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.010 10:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.010 "name": "raid_bdev1", 00:15:57.010 "uuid": "eead8000-d10a-4ec8-9eb0-c65966f127cd", 00:15:57.010 "strip_size_kb": 64, 00:15:57.010 "state": "online", 00:15:57.010 "raid_level": "raid5f", 00:15:57.010 "superblock": false, 00:15:57.010 "num_base_bdevs": 3, 00:15:57.010 "num_base_bdevs_discovered": 2, 00:15:57.010 "num_base_bdevs_operational": 2, 00:15:57.010 "base_bdevs_list": [ 00:15:57.010 { 00:15:57.010 "name": null, 00:15:57.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.010 "is_configured": false, 00:15:57.010 "data_offset": 0, 00:15:57.010 "data_size": 65536 00:15:57.010 }, 00:15:57.010 { 00:15:57.010 "name": "BaseBdev2", 00:15:57.010 "uuid": "da8aadbf-7eca-5f3f-b853-25c7e22d339b", 00:15:57.010 "is_configured": true, 00:15:57.010 "data_offset": 0, 00:15:57.010 "data_size": 65536 00:15:57.010 }, 00:15:57.010 { 00:15:57.010 "name": "BaseBdev3", 00:15:57.010 "uuid": 
"79f6a09d-d115-5721-8cce-ddab1f6fab54", 00:15:57.010 "is_configured": true, 00:15:57.010 "data_offset": 0, 00:15:57.010 "data_size": 65536 00:15:57.010 } 00:15:57.010 ] 00:15:57.010 }' 00:15:57.010 10:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.010 10:44:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.577 10:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:57.577 10:44:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.577 10:44:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.577 [2024-11-15 10:44:18.513225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:57.577 [2024-11-15 10:44:18.528536] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:15:57.577 10:44:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.577 10:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:57.577 [2024-11-15 10:44:18.535869] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:58.511 10:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:58.511 10:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.511 10:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:58.511 10:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:58.511 10:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.511 10:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.511 10:44:19 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.511 10:44:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.511 10:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.511 10:44:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.511 10:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.511 "name": "raid_bdev1", 00:15:58.511 "uuid": "eead8000-d10a-4ec8-9eb0-c65966f127cd", 00:15:58.511 "strip_size_kb": 64, 00:15:58.511 "state": "online", 00:15:58.511 "raid_level": "raid5f", 00:15:58.511 "superblock": false, 00:15:58.511 "num_base_bdevs": 3, 00:15:58.511 "num_base_bdevs_discovered": 3, 00:15:58.511 "num_base_bdevs_operational": 3, 00:15:58.511 "process": { 00:15:58.511 "type": "rebuild", 00:15:58.511 "target": "spare", 00:15:58.511 "progress": { 00:15:58.511 "blocks": 18432, 00:15:58.511 "percent": 14 00:15:58.511 } 00:15:58.511 }, 00:15:58.511 "base_bdevs_list": [ 00:15:58.511 { 00:15:58.511 "name": "spare", 00:15:58.511 "uuid": "94a76d09-d5b1-54ea-8e3b-fd0a326bb6aa", 00:15:58.511 "is_configured": true, 00:15:58.511 "data_offset": 0, 00:15:58.511 "data_size": 65536 00:15:58.511 }, 00:15:58.511 { 00:15:58.511 "name": "BaseBdev2", 00:15:58.511 "uuid": "da8aadbf-7eca-5f3f-b853-25c7e22d339b", 00:15:58.511 "is_configured": true, 00:15:58.511 "data_offset": 0, 00:15:58.511 "data_size": 65536 00:15:58.511 }, 00:15:58.511 { 00:15:58.511 "name": "BaseBdev3", 00:15:58.511 "uuid": "79f6a09d-d115-5721-8cce-ddab1f6fab54", 00:15:58.511 "is_configured": true, 00:15:58.511 "data_offset": 0, 00:15:58.511 "data_size": 65536 00:15:58.511 } 00:15:58.511 ] 00:15:58.511 }' 00:15:58.511 10:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:58.511 10:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:58.511 10:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.770 10:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:58.770 10:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:58.770 10:44:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.770 10:44:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.770 [2024-11-15 10:44:19.701952] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:58.770 [2024-11-15 10:44:19.750259] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:58.770 [2024-11-15 10:44:19.750473] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.770 [2024-11-15 10:44:19.750533] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:58.770 [2024-11-15 10:44:19.750549] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:58.770 10:44:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.770 10:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:58.770 10:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.770 10:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:58.770 10:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:58.770 10:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.770 10:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:15:58.770 10:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.770 10:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.770 10:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.770 10:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.770 10:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.770 10:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.770 10:44:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.770 10:44:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.771 10:44:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.771 10:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.771 "name": "raid_bdev1", 00:15:58.771 "uuid": "eead8000-d10a-4ec8-9eb0-c65966f127cd", 00:15:58.771 "strip_size_kb": 64, 00:15:58.771 "state": "online", 00:15:58.771 "raid_level": "raid5f", 00:15:58.771 "superblock": false, 00:15:58.771 "num_base_bdevs": 3, 00:15:58.771 "num_base_bdevs_discovered": 2, 00:15:58.771 "num_base_bdevs_operational": 2, 00:15:58.771 "base_bdevs_list": [ 00:15:58.771 { 00:15:58.771 "name": null, 00:15:58.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.771 "is_configured": false, 00:15:58.771 "data_offset": 0, 00:15:58.771 "data_size": 65536 00:15:58.771 }, 00:15:58.771 { 00:15:58.771 "name": "BaseBdev2", 00:15:58.771 "uuid": "da8aadbf-7eca-5f3f-b853-25c7e22d339b", 00:15:58.771 "is_configured": true, 00:15:58.771 "data_offset": 0, 00:15:58.771 "data_size": 65536 00:15:58.771 }, 00:15:58.771 { 00:15:58.771 "name": "BaseBdev3", 00:15:58.771 "uuid": 
"79f6a09d-d115-5721-8cce-ddab1f6fab54", 00:15:58.771 "is_configured": true, 00:15:58.771 "data_offset": 0, 00:15:58.771 "data_size": 65536 00:15:58.771 } 00:15:58.771 ] 00:15:58.771 }' 00:15:58.771 10:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.771 10:44:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.338 10:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:59.338 10:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.338 10:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:59.338 10:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:59.338 10:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.338 10:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.338 10:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.338 10:44:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.338 10:44:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.338 10:44:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.338 10:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.338 "name": "raid_bdev1", 00:15:59.338 "uuid": "eead8000-d10a-4ec8-9eb0-c65966f127cd", 00:15:59.338 "strip_size_kb": 64, 00:15:59.338 "state": "online", 00:15:59.338 "raid_level": "raid5f", 00:15:59.338 "superblock": false, 00:15:59.338 "num_base_bdevs": 3, 00:15:59.338 "num_base_bdevs_discovered": 2, 00:15:59.338 "num_base_bdevs_operational": 2, 00:15:59.338 "base_bdevs_list": [ 00:15:59.338 { 00:15:59.338 
"name": null, 00:15:59.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.338 "is_configured": false, 00:15:59.338 "data_offset": 0, 00:15:59.338 "data_size": 65536 00:15:59.338 }, 00:15:59.338 { 00:15:59.338 "name": "BaseBdev2", 00:15:59.338 "uuid": "da8aadbf-7eca-5f3f-b853-25c7e22d339b", 00:15:59.338 "is_configured": true, 00:15:59.338 "data_offset": 0, 00:15:59.338 "data_size": 65536 00:15:59.338 }, 00:15:59.338 { 00:15:59.338 "name": "BaseBdev3", 00:15:59.338 "uuid": "79f6a09d-d115-5721-8cce-ddab1f6fab54", 00:15:59.338 "is_configured": true, 00:15:59.338 "data_offset": 0, 00:15:59.338 "data_size": 65536 00:15:59.338 } 00:15:59.338 ] 00:15:59.338 }' 00:15:59.338 10:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.338 10:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:59.338 10:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.338 10:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:59.338 10:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:59.338 10:44:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.338 10:44:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.338 [2024-11-15 10:44:20.461280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:59.338 [2024-11-15 10:44:20.476229] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:15:59.338 10:44:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.338 10:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:59.338 [2024-11-15 10:44:20.483626] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild 
on raid bdev raid_bdev1 00:16:00.714 10:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:00.714 10:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.714 10:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:00.714 10:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:00.714 10:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.714 10:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.714 10:44:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.714 10:44:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.714 10:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.714 10:44:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.714 10:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.714 "name": "raid_bdev1", 00:16:00.714 "uuid": "eead8000-d10a-4ec8-9eb0-c65966f127cd", 00:16:00.714 "strip_size_kb": 64, 00:16:00.714 "state": "online", 00:16:00.714 "raid_level": "raid5f", 00:16:00.714 "superblock": false, 00:16:00.714 "num_base_bdevs": 3, 00:16:00.714 "num_base_bdevs_discovered": 3, 00:16:00.714 "num_base_bdevs_operational": 3, 00:16:00.714 "process": { 00:16:00.714 "type": "rebuild", 00:16:00.714 "target": "spare", 00:16:00.714 "progress": { 00:16:00.714 "blocks": 18432, 00:16:00.714 "percent": 14 00:16:00.714 } 00:16:00.714 }, 00:16:00.714 "base_bdevs_list": [ 00:16:00.714 { 00:16:00.714 "name": "spare", 00:16:00.714 "uuid": "94a76d09-d5b1-54ea-8e3b-fd0a326bb6aa", 00:16:00.714 "is_configured": true, 00:16:00.714 "data_offset": 0, 
00:16:00.714 "data_size": 65536 00:16:00.714 }, 00:16:00.714 { 00:16:00.714 "name": "BaseBdev2", 00:16:00.714 "uuid": "da8aadbf-7eca-5f3f-b853-25c7e22d339b", 00:16:00.714 "is_configured": true, 00:16:00.714 "data_offset": 0, 00:16:00.714 "data_size": 65536 00:16:00.714 }, 00:16:00.714 { 00:16:00.714 "name": "BaseBdev3", 00:16:00.714 "uuid": "79f6a09d-d115-5721-8cce-ddab1f6fab54", 00:16:00.714 "is_configured": true, 00:16:00.714 "data_offset": 0, 00:16:00.714 "data_size": 65536 00:16:00.714 } 00:16:00.714 ] 00:16:00.714 }' 00:16:00.714 10:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.714 10:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:00.714 10:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.714 10:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:00.714 10:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:00.714 10:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:16:00.714 10:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:00.714 10:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=590 00:16:00.714 10:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:00.714 10:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:00.714 10:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.714 10:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:00.714 10:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:00.714 10:44:21 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.714 10:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.714 10:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.714 10:44:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.714 10:44:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.714 10:44:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.714 10:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.714 "name": "raid_bdev1", 00:16:00.714 "uuid": "eead8000-d10a-4ec8-9eb0-c65966f127cd", 00:16:00.714 "strip_size_kb": 64, 00:16:00.714 "state": "online", 00:16:00.714 "raid_level": "raid5f", 00:16:00.714 "superblock": false, 00:16:00.714 "num_base_bdevs": 3, 00:16:00.714 "num_base_bdevs_discovered": 3, 00:16:00.714 "num_base_bdevs_operational": 3, 00:16:00.714 "process": { 00:16:00.714 "type": "rebuild", 00:16:00.714 "target": "spare", 00:16:00.714 "progress": { 00:16:00.714 "blocks": 22528, 00:16:00.714 "percent": 17 00:16:00.714 } 00:16:00.714 }, 00:16:00.714 "base_bdevs_list": [ 00:16:00.714 { 00:16:00.714 "name": "spare", 00:16:00.714 "uuid": "94a76d09-d5b1-54ea-8e3b-fd0a326bb6aa", 00:16:00.714 "is_configured": true, 00:16:00.714 "data_offset": 0, 00:16:00.714 "data_size": 65536 00:16:00.714 }, 00:16:00.714 { 00:16:00.714 "name": "BaseBdev2", 00:16:00.714 "uuid": "da8aadbf-7eca-5f3f-b853-25c7e22d339b", 00:16:00.714 "is_configured": true, 00:16:00.714 "data_offset": 0, 00:16:00.714 "data_size": 65536 00:16:00.714 }, 00:16:00.714 { 00:16:00.714 "name": "BaseBdev3", 00:16:00.714 "uuid": "79f6a09d-d115-5721-8cce-ddab1f6fab54", 00:16:00.714 "is_configured": true, 00:16:00.714 "data_offset": 0, 00:16:00.714 "data_size": 65536 00:16:00.714 } 
00:16:00.714 ] 00:16:00.714 }' 00:16:00.714 10:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.714 10:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:00.714 10:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.714 10:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:00.714 10:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:01.702 10:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:01.702 10:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:01.702 10:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.702 10:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:01.702 10:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:01.702 10:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.702 10:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.702 10:44:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.702 10:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.703 10:44:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.703 10:44:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.961 10:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.961 "name": "raid_bdev1", 00:16:01.961 "uuid": "eead8000-d10a-4ec8-9eb0-c65966f127cd", 00:16:01.961 
"strip_size_kb": 64, 00:16:01.961 "state": "online", 00:16:01.961 "raid_level": "raid5f", 00:16:01.961 "superblock": false, 00:16:01.961 "num_base_bdevs": 3, 00:16:01.961 "num_base_bdevs_discovered": 3, 00:16:01.961 "num_base_bdevs_operational": 3, 00:16:01.961 "process": { 00:16:01.961 "type": "rebuild", 00:16:01.961 "target": "spare", 00:16:01.961 "progress": { 00:16:01.961 "blocks": 47104, 00:16:01.961 "percent": 35 00:16:01.961 } 00:16:01.961 }, 00:16:01.961 "base_bdevs_list": [ 00:16:01.961 { 00:16:01.961 "name": "spare", 00:16:01.961 "uuid": "94a76d09-d5b1-54ea-8e3b-fd0a326bb6aa", 00:16:01.961 "is_configured": true, 00:16:01.961 "data_offset": 0, 00:16:01.961 "data_size": 65536 00:16:01.961 }, 00:16:01.961 { 00:16:01.961 "name": "BaseBdev2", 00:16:01.961 "uuid": "da8aadbf-7eca-5f3f-b853-25c7e22d339b", 00:16:01.961 "is_configured": true, 00:16:01.961 "data_offset": 0, 00:16:01.961 "data_size": 65536 00:16:01.961 }, 00:16:01.961 { 00:16:01.961 "name": "BaseBdev3", 00:16:01.961 "uuid": "79f6a09d-d115-5721-8cce-ddab1f6fab54", 00:16:01.961 "is_configured": true, 00:16:01.961 "data_offset": 0, 00:16:01.961 "data_size": 65536 00:16:01.961 } 00:16:01.961 ] 00:16:01.961 }' 00:16:01.961 10:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.961 10:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:01.961 10:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.961 10:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:01.961 10:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:02.896 10:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:02.896 10:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:02.896 10:44:23 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.896 10:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:02.896 10:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:02.896 10:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.896 10:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.896 10:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.896 10:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.896 10:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.896 10:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.896 10:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.896 "name": "raid_bdev1", 00:16:02.896 "uuid": "eead8000-d10a-4ec8-9eb0-c65966f127cd", 00:16:02.896 "strip_size_kb": 64, 00:16:02.896 "state": "online", 00:16:02.896 "raid_level": "raid5f", 00:16:02.896 "superblock": false, 00:16:02.896 "num_base_bdevs": 3, 00:16:02.896 "num_base_bdevs_discovered": 3, 00:16:02.896 "num_base_bdevs_operational": 3, 00:16:02.896 "process": { 00:16:02.896 "type": "rebuild", 00:16:02.896 "target": "spare", 00:16:02.896 "progress": { 00:16:02.896 "blocks": 69632, 00:16:02.896 "percent": 53 00:16:02.896 } 00:16:02.896 }, 00:16:02.896 "base_bdevs_list": [ 00:16:02.896 { 00:16:02.896 "name": "spare", 00:16:02.896 "uuid": "94a76d09-d5b1-54ea-8e3b-fd0a326bb6aa", 00:16:02.896 "is_configured": true, 00:16:02.896 "data_offset": 0, 00:16:02.896 "data_size": 65536 00:16:02.896 }, 00:16:02.896 { 00:16:02.896 "name": "BaseBdev2", 00:16:02.896 "uuid": "da8aadbf-7eca-5f3f-b853-25c7e22d339b", 00:16:02.896 
"is_configured": true, 00:16:02.896 "data_offset": 0, 00:16:02.896 "data_size": 65536 00:16:02.896 }, 00:16:02.896 { 00:16:02.896 "name": "BaseBdev3", 00:16:02.896 "uuid": "79f6a09d-d115-5721-8cce-ddab1f6fab54", 00:16:02.896 "is_configured": true, 00:16:02.896 "data_offset": 0, 00:16:02.896 "data_size": 65536 00:16:02.896 } 00:16:02.896 ] 00:16:02.896 }' 00:16:02.896 10:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.155 10:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:03.155 10:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:03.155 10:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:03.155 10:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:04.090 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:04.090 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:04.090 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.090 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:04.090 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:04.090 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.090 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.090 10:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.090 10:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.090 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:16:04.090 10:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.090 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.090 "name": "raid_bdev1", 00:16:04.090 "uuid": "eead8000-d10a-4ec8-9eb0-c65966f127cd", 00:16:04.090 "strip_size_kb": 64, 00:16:04.090 "state": "online", 00:16:04.090 "raid_level": "raid5f", 00:16:04.090 "superblock": false, 00:16:04.090 "num_base_bdevs": 3, 00:16:04.090 "num_base_bdevs_discovered": 3, 00:16:04.090 "num_base_bdevs_operational": 3, 00:16:04.090 "process": { 00:16:04.090 "type": "rebuild", 00:16:04.090 "target": "spare", 00:16:04.090 "progress": { 00:16:04.090 "blocks": 94208, 00:16:04.090 "percent": 71 00:16:04.090 } 00:16:04.090 }, 00:16:04.090 "base_bdevs_list": [ 00:16:04.090 { 00:16:04.090 "name": "spare", 00:16:04.090 "uuid": "94a76d09-d5b1-54ea-8e3b-fd0a326bb6aa", 00:16:04.090 "is_configured": true, 00:16:04.090 "data_offset": 0, 00:16:04.090 "data_size": 65536 00:16:04.090 }, 00:16:04.090 { 00:16:04.090 "name": "BaseBdev2", 00:16:04.090 "uuid": "da8aadbf-7eca-5f3f-b853-25c7e22d339b", 00:16:04.090 "is_configured": true, 00:16:04.090 "data_offset": 0, 00:16:04.090 "data_size": 65536 00:16:04.090 }, 00:16:04.090 { 00:16:04.090 "name": "BaseBdev3", 00:16:04.090 "uuid": "79f6a09d-d115-5721-8cce-ddab1f6fab54", 00:16:04.090 "is_configured": true, 00:16:04.090 "data_offset": 0, 00:16:04.090 "data_size": 65536 00:16:04.090 } 00:16:04.090 ] 00:16:04.090 }' 00:16:04.090 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.090 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:04.090 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.348 10:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:04.348 10:44:25 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:05.282 10:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:05.282 10:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:05.282 10:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:05.282 10:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:05.282 10:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:05.283 10:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:05.283 10:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.283 10:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.283 10:44:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.283 10:44:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.283 10:44:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.283 10:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.283 "name": "raid_bdev1", 00:16:05.283 "uuid": "eead8000-d10a-4ec8-9eb0-c65966f127cd", 00:16:05.283 "strip_size_kb": 64, 00:16:05.283 "state": "online", 00:16:05.283 "raid_level": "raid5f", 00:16:05.283 "superblock": false, 00:16:05.283 "num_base_bdevs": 3, 00:16:05.283 "num_base_bdevs_discovered": 3, 00:16:05.283 "num_base_bdevs_operational": 3, 00:16:05.283 "process": { 00:16:05.283 "type": "rebuild", 00:16:05.283 "target": "spare", 00:16:05.283 "progress": { 00:16:05.283 "blocks": 116736, 00:16:05.283 "percent": 89 00:16:05.283 } 00:16:05.283 }, 00:16:05.283 "base_bdevs_list": [ 00:16:05.283 { 
00:16:05.283 "name": "spare", 00:16:05.283 "uuid": "94a76d09-d5b1-54ea-8e3b-fd0a326bb6aa", 00:16:05.283 "is_configured": true, 00:16:05.283 "data_offset": 0, 00:16:05.283 "data_size": 65536 00:16:05.283 }, 00:16:05.283 { 00:16:05.283 "name": "BaseBdev2", 00:16:05.283 "uuid": "da8aadbf-7eca-5f3f-b853-25c7e22d339b", 00:16:05.283 "is_configured": true, 00:16:05.283 "data_offset": 0, 00:16:05.283 "data_size": 65536 00:16:05.283 }, 00:16:05.283 { 00:16:05.283 "name": "BaseBdev3", 00:16:05.283 "uuid": "79f6a09d-d115-5721-8cce-ddab1f6fab54", 00:16:05.283 "is_configured": true, 00:16:05.283 "data_offset": 0, 00:16:05.283 "data_size": 65536 00:16:05.283 } 00:16:05.283 ] 00:16:05.283 }' 00:16:05.283 10:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:05.283 10:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:05.283 10:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:05.541 10:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:05.541 10:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:05.800 [2024-11-15 10:44:26.955409] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:05.800 [2024-11-15 10:44:26.955523] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:05.800 [2024-11-15 10:44:26.955644] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.367 10:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:06.368 10:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:06.368 10:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.368 10:44:27 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:06.368 10:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:06.368 10:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.368 10:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.368 10:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.368 10:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.368 10:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.368 10:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.368 10:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.368 "name": "raid_bdev1", 00:16:06.368 "uuid": "eead8000-d10a-4ec8-9eb0-c65966f127cd", 00:16:06.368 "strip_size_kb": 64, 00:16:06.368 "state": "online", 00:16:06.368 "raid_level": "raid5f", 00:16:06.368 "superblock": false, 00:16:06.368 "num_base_bdevs": 3, 00:16:06.368 "num_base_bdevs_discovered": 3, 00:16:06.368 "num_base_bdevs_operational": 3, 00:16:06.368 "base_bdevs_list": [ 00:16:06.368 { 00:16:06.368 "name": "spare", 00:16:06.368 "uuid": "94a76d09-d5b1-54ea-8e3b-fd0a326bb6aa", 00:16:06.368 "is_configured": true, 00:16:06.368 "data_offset": 0, 00:16:06.368 "data_size": 65536 00:16:06.368 }, 00:16:06.368 { 00:16:06.368 "name": "BaseBdev2", 00:16:06.368 "uuid": "da8aadbf-7eca-5f3f-b853-25c7e22d339b", 00:16:06.368 "is_configured": true, 00:16:06.368 "data_offset": 0, 00:16:06.368 "data_size": 65536 00:16:06.368 }, 00:16:06.368 { 00:16:06.368 "name": "BaseBdev3", 00:16:06.368 "uuid": "79f6a09d-d115-5721-8cce-ddab1f6fab54", 00:16:06.368 "is_configured": true, 00:16:06.368 "data_offset": 0, 00:16:06.368 "data_size": 65536 00:16:06.368 } 
00:16:06.368 ] 00:16:06.368 }' 00:16:06.368 10:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.638 10:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:06.638 10:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.638 10:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:06.638 10:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:06.638 10:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:06.638 10:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.638 10:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:06.638 10:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:06.638 10:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.638 10:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.638 10:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.638 10:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.638 10:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.638 10:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.638 10:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.638 "name": "raid_bdev1", 00:16:06.638 "uuid": "eead8000-d10a-4ec8-9eb0-c65966f127cd", 00:16:06.638 "strip_size_kb": 64, 00:16:06.638 "state": "online", 00:16:06.638 "raid_level": "raid5f", 00:16:06.638 "superblock": false, 
00:16:06.638 "num_base_bdevs": 3, 00:16:06.638 "num_base_bdevs_discovered": 3, 00:16:06.638 "num_base_bdevs_operational": 3, 00:16:06.638 "base_bdevs_list": [ 00:16:06.638 { 00:16:06.638 "name": "spare", 00:16:06.638 "uuid": "94a76d09-d5b1-54ea-8e3b-fd0a326bb6aa", 00:16:06.638 "is_configured": true, 00:16:06.638 "data_offset": 0, 00:16:06.638 "data_size": 65536 00:16:06.638 }, 00:16:06.638 { 00:16:06.638 "name": "BaseBdev2", 00:16:06.638 "uuid": "da8aadbf-7eca-5f3f-b853-25c7e22d339b", 00:16:06.638 "is_configured": true, 00:16:06.638 "data_offset": 0, 00:16:06.638 "data_size": 65536 00:16:06.638 }, 00:16:06.638 { 00:16:06.638 "name": "BaseBdev3", 00:16:06.638 "uuid": "79f6a09d-d115-5721-8cce-ddab1f6fab54", 00:16:06.638 "is_configured": true, 00:16:06.638 "data_offset": 0, 00:16:06.638 "data_size": 65536 00:16:06.638 } 00:16:06.638 ] 00:16:06.638 }' 00:16:06.638 10:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.638 10:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:06.638 10:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.638 10:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:06.638 10:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:06.638 10:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.639 10:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.639 10:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.639 10:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.639 10:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:06.639 
10:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.639 10:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.639 10:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.639 10:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.920 10:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.920 10:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.920 10:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.920 10:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.920 10:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.920 10:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.920 "name": "raid_bdev1", 00:16:06.920 "uuid": "eead8000-d10a-4ec8-9eb0-c65966f127cd", 00:16:06.920 "strip_size_kb": 64, 00:16:06.920 "state": "online", 00:16:06.920 "raid_level": "raid5f", 00:16:06.920 "superblock": false, 00:16:06.921 "num_base_bdevs": 3, 00:16:06.921 "num_base_bdevs_discovered": 3, 00:16:06.921 "num_base_bdevs_operational": 3, 00:16:06.921 "base_bdevs_list": [ 00:16:06.921 { 00:16:06.921 "name": "spare", 00:16:06.921 "uuid": "94a76d09-d5b1-54ea-8e3b-fd0a326bb6aa", 00:16:06.921 "is_configured": true, 00:16:06.921 "data_offset": 0, 00:16:06.921 "data_size": 65536 00:16:06.921 }, 00:16:06.921 { 00:16:06.921 "name": "BaseBdev2", 00:16:06.921 "uuid": "da8aadbf-7eca-5f3f-b853-25c7e22d339b", 00:16:06.921 "is_configured": true, 00:16:06.921 "data_offset": 0, 00:16:06.921 "data_size": 65536 00:16:06.921 }, 00:16:06.921 { 00:16:06.921 "name": "BaseBdev3", 00:16:06.921 "uuid": "79f6a09d-d115-5721-8cce-ddab1f6fab54", 
00:16:06.921 "is_configured": true, 00:16:06.921 "data_offset": 0, 00:16:06.921 "data_size": 65536 00:16:06.921 } 00:16:06.921 ] 00:16:06.921 }' 00:16:06.921 10:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.921 10:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.184 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:07.184 10:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.184 10:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.184 [2024-11-15 10:44:28.293664] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:07.185 [2024-11-15 10:44:28.293700] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:07.185 [2024-11-15 10:44:28.293811] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:07.185 [2024-11-15 10:44:28.293945] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:07.185 [2024-11-15 10:44:28.293974] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:07.185 10:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.185 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.185 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:07.185 10:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.185 10:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.185 10:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.443 10:44:28 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:07.443 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:07.443 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:07.443 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:07.443 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:07.443 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:07.443 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:07.443 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:07.443 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:07.443 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:07.443 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:07.443 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:07.443 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:07.702 /dev/nbd0 00:16:07.702 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:07.702 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:07.702 10:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:07.702 10:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:07.702 10:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:07.702 10:44:28 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:07.702 10:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:07.702 10:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:07.702 10:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:07.702 10:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:07.702 10:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:07.702 1+0 records in 00:16:07.702 1+0 records out 00:16:07.702 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00060634 s, 6.8 MB/s 00:16:07.702 10:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:07.702 10:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:07.702 10:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:07.702 10:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:07.702 10:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:07.702 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:07.702 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:07.702 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:07.961 /dev/nbd1 00:16:07.961 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:07.961 10:44:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:07.961 10:44:28 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:07.961 10:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:07.961 10:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:07.961 10:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:07.961 10:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:07.961 10:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:07.961 10:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:07.961 10:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:07.961 10:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:07.961 1+0 records in 00:16:07.961 1+0 records out 00:16:07.961 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038282 s, 10.7 MB/s 00:16:07.961 10:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:07.961 10:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:07.961 10:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:07.961 10:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:07.961 10:44:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:07.961 10:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:07.961 10:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:07.961 10:44:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- 
# cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:08.218 10:44:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:08.218 10:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:08.218 10:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:08.218 10:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:08.218 10:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:08.218 10:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:08.218 10:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:08.476 10:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:08.476 10:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:08.476 10:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:08.476 10:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:08.476 10:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:08.476 10:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:08.476 10:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:08.476 10:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:08.476 10:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:08.476 10:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:08.734 10:44:29 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:08.734 10:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:08.734 10:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:08.734 10:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:08.734 10:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:08.734 10:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:08.734 10:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:08.734 10:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:08.734 10:44:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:08.734 10:44:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81902 00:16:08.734 10:44:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81902 ']' 00:16:08.734 10:44:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81902 00:16:08.734 10:44:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:16:08.734 10:44:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:08.734 10:44:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81902 00:16:08.734 10:44:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:08.734 killing process with pid 81902 00:16:08.734 Received shutdown signal, test time was about 60.000000 seconds 00:16:08.734 00:16:08.734 Latency(us) 00:16:08.734 [2024-11-15T10:44:29.896Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:08.734 [2024-11-15T10:44:29.896Z] 
=================================================================================================================== 00:16:08.734 [2024-11-15T10:44:29.896Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:08.734 10:44:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:08.734 10:44:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81902' 00:16:08.734 10:44:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81902 00:16:08.734 [2024-11-15 10:44:29.816298] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:08.734 10:44:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81902 00:16:09.303 [2024-11-15 10:44:30.170913] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:10.239 ************************************ 00:16:10.239 END TEST raid5f_rebuild_test 00:16:10.239 ************************************ 00:16:10.239 10:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:10.239 00:16:10.239 real 0m16.291s 00:16:10.239 user 0m20.897s 00:16:10.239 sys 0m1.936s 00:16:10.239 10:44:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:10.239 10:44:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.239 10:44:31 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:16:10.239 10:44:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:10.239 10:44:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:10.239 10:44:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:10.239 ************************************ 00:16:10.239 START TEST raid5f_rebuild_test_sb 00:16:10.239 ************************************ 00:16:10.239 10:44:31 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:16:10.239 10:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:10.239 10:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:16:10.239 10:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:10.239 10:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:10.239 10:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:10.239 10:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:10.239 10:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:10.239 10:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:10.239 10:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:10.239 10:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:10.239 10:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:10.239 10:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:10.239 10:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:10.239 10:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:10.239 10:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:10.239 10:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:10.239 10:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:10.239 10:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:10.239 
10:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:10.239 10:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:10.239 10:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:10.239 10:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:10.240 10:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:10.240 10:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:10.240 10:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:10.240 10:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:10.240 10:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:10.240 10:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:10.240 10:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:10.240 10:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82355 00:16:10.240 10:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82355 00:16:10.240 10:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:10.240 10:44:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82355 ']' 00:16:10.240 10:44:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:10.240 10:44:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:10.240 10:44:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:10.240 10:44:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:10.240 10:44:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.240 [2024-11-15 10:44:31.347641] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:16:10.240 [2024-11-15 10:44:31.347805] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82355 ] 00:16:10.240 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:10.240 Zero copy mechanism will not be used. 
00:16:10.499 [2024-11-15 10:44:31.522446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.499 [2024-11-15 10:44:31.651190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.757 [2024-11-15 10:44:31.852797] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:10.757 [2024-11-15 10:44:31.852883] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:11.325 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:11.325 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:11.325 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:11.325 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:11.325 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.325 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.325 BaseBdev1_malloc 00:16:11.325 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.325 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:11.325 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.325 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.325 [2024-11-15 10:44:32.361181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:11.325 [2024-11-15 10:44:32.361276] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.325 [2024-11-15 10:44:32.361310] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:11.325 
[2024-11-15 10:44:32.361330] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.325 [2024-11-15 10:44:32.364191] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.325 [2024-11-15 10:44:32.364256] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:11.325 BaseBdev1 00:16:11.325 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.325 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:11.325 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:11.325 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.325 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.325 BaseBdev2_malloc 00:16:11.325 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.325 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:11.325 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.325 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.325 [2024-11-15 10:44:32.416807] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:11.325 [2024-11-15 10:44:32.416879] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.325 [2024-11-15 10:44:32.416906] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:11.325 [2024-11-15 10:44:32.416926] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.325 [2024-11-15 10:44:32.419609] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.325 [2024-11-15 10:44:32.419657] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:11.325 BaseBdev2 00:16:11.325 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.325 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:11.325 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:11.325 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.325 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.325 BaseBdev3_malloc 00:16:11.325 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.325 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:11.325 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.325 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.584 [2024-11-15 10:44:32.486166] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:11.584 [2024-11-15 10:44:32.486234] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.584 [2024-11-15 10:44:32.486264] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:11.584 [2024-11-15 10:44:32.486283] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.584 [2024-11-15 10:44:32.488951] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.584 [2024-11-15 10:44:32.489002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:16:11.584 BaseBdev3 00:16:11.584 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.585 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:11.585 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.585 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.585 spare_malloc 00:16:11.585 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.585 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:11.585 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.585 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.585 spare_delay 00:16:11.585 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.585 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:11.585 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.585 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.585 [2024-11-15 10:44:32.545983] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:11.585 [2024-11-15 10:44:32.546047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.585 [2024-11-15 10:44:32.546073] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:11.585 [2024-11-15 10:44:32.546090] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.585 [2024-11-15 10:44:32.548813] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.585 [2024-11-15 10:44:32.548864] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:11.585 spare 00:16:11.585 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.585 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:16:11.585 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.585 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.585 [2024-11-15 10:44:32.554070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:11.585 [2024-11-15 10:44:32.556411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:11.585 [2024-11-15 10:44:32.556545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:11.585 [2024-11-15 10:44:32.556793] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:11.585 [2024-11-15 10:44:32.556816] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:11.585 [2024-11-15 10:44:32.557138] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:11.585 [2024-11-15 10:44:32.562201] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:11.585 [2024-11-15 10:44:32.562237] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:11.585 [2024-11-15 10:44:32.562459] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.585 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.585 10:44:32 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:11.585 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:11.585 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.585 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:11.585 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:11.585 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:11.585 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.585 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.585 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.585 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.585 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.585 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.585 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.585 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.585 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.585 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.585 "name": "raid_bdev1", 00:16:11.585 "uuid": "d074ef6a-d3a4-444a-843a-2d79e5531ddb", 00:16:11.585 "strip_size_kb": 64, 00:16:11.585 "state": "online", 00:16:11.585 "raid_level": "raid5f", 00:16:11.585 "superblock": true, 
00:16:11.585 "num_base_bdevs": 3, 00:16:11.585 "num_base_bdevs_discovered": 3, 00:16:11.585 "num_base_bdevs_operational": 3, 00:16:11.585 "base_bdevs_list": [ 00:16:11.585 { 00:16:11.585 "name": "BaseBdev1", 00:16:11.585 "uuid": "e8dfb31e-e36d-5cf0-85d3-b7674d69ec47", 00:16:11.585 "is_configured": true, 00:16:11.585 "data_offset": 2048, 00:16:11.585 "data_size": 63488 00:16:11.585 }, 00:16:11.585 { 00:16:11.585 "name": "BaseBdev2", 00:16:11.585 "uuid": "67a5c606-7842-50a4-bd5c-f582ea51274a", 00:16:11.585 "is_configured": true, 00:16:11.585 "data_offset": 2048, 00:16:11.585 "data_size": 63488 00:16:11.585 }, 00:16:11.585 { 00:16:11.585 "name": "BaseBdev3", 00:16:11.585 "uuid": "0b190bef-0ae5-5f55-a80b-c1896efcf29f", 00:16:11.585 "is_configured": true, 00:16:11.585 "data_offset": 2048, 00:16:11.585 "data_size": 63488 00:16:11.585 } 00:16:11.585 ] 00:16:11.585 }' 00:16:11.585 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.585 10:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.189 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:12.189 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:12.189 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.189 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.189 [2024-11-15 10:44:33.048419] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:12.189 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.189 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:16:12.189 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:12.189 10:44:33 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.189 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.189 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.189 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.189 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:12.189 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:12.189 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:12.189 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:12.189 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:12.189 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:12.189 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:12.189 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:12.189 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:12.189 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:12.189 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:12.189 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:12.189 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:12.189 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:12.448 
[2024-11-15 10:44:33.400313] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:12.448 /dev/nbd0 00:16:12.448 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:12.448 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:12.448 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:12.448 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:12.448 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:12.448 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:12.448 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:12.448 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:12.448 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:12.448 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:12.448 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:12.448 1+0 records in 00:16:12.448 1+0 records out 00:16:12.448 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240382 s, 17.0 MB/s 00:16:12.448 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:12.448 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:12.448 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:12.448 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:12.448 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:12.449 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:12.449 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:12.449 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:12.449 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:16:12.449 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:16:12.449 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:16:13.014 496+0 records in 00:16:13.014 496+0 records out 00:16:13.014 65011712 bytes (65 MB, 62 MiB) copied, 0.415832 s, 156 MB/s 00:16:13.014 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:13.014 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:13.014 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:13.014 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:13.014 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:13.014 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:13.014 10:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:13.014 [2024-11-15 10:44:34.171067] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:13.274 10:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 
00:16:13.274 10:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:13.274 10:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:13.274 10:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:13.274 10:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:13.274 10:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:13.274 10:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:13.274 10:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:13.274 10:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:13.274 10:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.274 10:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.274 [2024-11-15 10:44:34.200869] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:13.274 10:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.274 10:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:13.274 10:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:13.274 10:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:13.274 10:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:13.274 10:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:13.274 10:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:13.274 10:44:34 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.274 10:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.274 10:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.274 10:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.274 10:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.274 10:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.274 10:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.274 10:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.274 10:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.274 10:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.274 "name": "raid_bdev1", 00:16:13.274 "uuid": "d074ef6a-d3a4-444a-843a-2d79e5531ddb", 00:16:13.274 "strip_size_kb": 64, 00:16:13.274 "state": "online", 00:16:13.274 "raid_level": "raid5f", 00:16:13.274 "superblock": true, 00:16:13.274 "num_base_bdevs": 3, 00:16:13.274 "num_base_bdevs_discovered": 2, 00:16:13.274 "num_base_bdevs_operational": 2, 00:16:13.274 "base_bdevs_list": [ 00:16:13.274 { 00:16:13.274 "name": null, 00:16:13.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.274 "is_configured": false, 00:16:13.274 "data_offset": 0, 00:16:13.274 "data_size": 63488 00:16:13.274 }, 00:16:13.274 { 00:16:13.274 "name": "BaseBdev2", 00:16:13.274 "uuid": "67a5c606-7842-50a4-bd5c-f582ea51274a", 00:16:13.274 "is_configured": true, 00:16:13.274 "data_offset": 2048, 00:16:13.274 "data_size": 63488 00:16:13.274 }, 00:16:13.274 { 00:16:13.274 "name": "BaseBdev3", 00:16:13.274 "uuid": 
"0b190bef-0ae5-5f55-a80b-c1896efcf29f", 00:16:13.274 "is_configured": true, 00:16:13.274 "data_offset": 2048, 00:16:13.274 "data_size": 63488 00:16:13.274 } 00:16:13.274 ] 00:16:13.274 }' 00:16:13.274 10:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.274 10:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.850 10:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:13.850 10:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.850 10:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.850 [2024-11-15 10:44:34.717019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:13.850 [2024-11-15 10:44:34.732292] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:16:13.850 10:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.850 10:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:13.850 [2024-11-15 10:44:34.739558] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:14.784 10:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:14.784 10:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.784 10:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:14.785 10:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:14.785 10:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.785 10:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:16:14.785 10:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.785 10:44:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.785 10:44:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.785 10:44:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.785 10:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.785 "name": "raid_bdev1", 00:16:14.785 "uuid": "d074ef6a-d3a4-444a-843a-2d79e5531ddb", 00:16:14.785 "strip_size_kb": 64, 00:16:14.785 "state": "online", 00:16:14.785 "raid_level": "raid5f", 00:16:14.785 "superblock": true, 00:16:14.785 "num_base_bdevs": 3, 00:16:14.785 "num_base_bdevs_discovered": 3, 00:16:14.785 "num_base_bdevs_operational": 3, 00:16:14.785 "process": { 00:16:14.785 "type": "rebuild", 00:16:14.785 "target": "spare", 00:16:14.785 "progress": { 00:16:14.785 "blocks": 18432, 00:16:14.785 "percent": 14 00:16:14.785 } 00:16:14.785 }, 00:16:14.785 "base_bdevs_list": [ 00:16:14.785 { 00:16:14.785 "name": "spare", 00:16:14.785 "uuid": "d0abc887-2797-5e2c-850b-b37a980087c5", 00:16:14.785 "is_configured": true, 00:16:14.785 "data_offset": 2048, 00:16:14.785 "data_size": 63488 00:16:14.785 }, 00:16:14.785 { 00:16:14.785 "name": "BaseBdev2", 00:16:14.785 "uuid": "67a5c606-7842-50a4-bd5c-f582ea51274a", 00:16:14.785 "is_configured": true, 00:16:14.785 "data_offset": 2048, 00:16:14.785 "data_size": 63488 00:16:14.785 }, 00:16:14.785 { 00:16:14.785 "name": "BaseBdev3", 00:16:14.785 "uuid": "0b190bef-0ae5-5f55-a80b-c1896efcf29f", 00:16:14.785 "is_configured": true, 00:16:14.785 "data_offset": 2048, 00:16:14.785 "data_size": 63488 00:16:14.785 } 00:16:14.785 ] 00:16:14.785 }' 00:16:14.785 10:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.785 10:44:35 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:14.785 10:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.785 10:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:14.785 10:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:14.785 10:44:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.785 10:44:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.785 [2024-11-15 10:44:35.901295] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:15.043 [2024-11-15 10:44:35.953334] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:15.043 [2024-11-15 10:44:35.953425] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:15.043 [2024-11-15 10:44:35.953453] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:15.044 [2024-11-15 10:44:35.953466] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:15.044 10:44:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.044 10:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:15.044 10:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:15.044 10:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:15.044 10:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:15.044 10:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:15.044 10:44:35 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:15.044 10:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.044 10:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.044 10:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.044 10:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.044 10:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.044 10:44:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.044 10:44:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.044 10:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.044 10:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.044 10:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.044 "name": "raid_bdev1", 00:16:15.044 "uuid": "d074ef6a-d3a4-444a-843a-2d79e5531ddb", 00:16:15.044 "strip_size_kb": 64, 00:16:15.044 "state": "online", 00:16:15.044 "raid_level": "raid5f", 00:16:15.044 "superblock": true, 00:16:15.044 "num_base_bdevs": 3, 00:16:15.044 "num_base_bdevs_discovered": 2, 00:16:15.044 "num_base_bdevs_operational": 2, 00:16:15.044 "base_bdevs_list": [ 00:16:15.044 { 00:16:15.044 "name": null, 00:16:15.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.044 "is_configured": false, 00:16:15.044 "data_offset": 0, 00:16:15.044 "data_size": 63488 00:16:15.044 }, 00:16:15.044 { 00:16:15.044 "name": "BaseBdev2", 00:16:15.044 "uuid": "67a5c606-7842-50a4-bd5c-f582ea51274a", 00:16:15.044 "is_configured": true, 00:16:15.044 "data_offset": 2048, 00:16:15.044 "data_size": 
63488 00:16:15.044 }, 00:16:15.044 { 00:16:15.044 "name": "BaseBdev3", 00:16:15.044 "uuid": "0b190bef-0ae5-5f55-a80b-c1896efcf29f", 00:16:15.044 "is_configured": true, 00:16:15.044 "data_offset": 2048, 00:16:15.044 "data_size": 63488 00:16:15.044 } 00:16:15.044 ] 00:16:15.044 }' 00:16:15.044 10:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.044 10:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.612 10:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:15.612 10:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.612 10:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:15.612 10:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:15.612 10:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.612 10:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.612 10:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.612 10:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.612 10:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.612 10:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.612 10:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.612 "name": "raid_bdev1", 00:16:15.612 "uuid": "d074ef6a-d3a4-444a-843a-2d79e5531ddb", 00:16:15.612 "strip_size_kb": 64, 00:16:15.612 "state": "online", 00:16:15.612 "raid_level": "raid5f", 00:16:15.612 "superblock": true, 00:16:15.612 "num_base_bdevs": 3, 00:16:15.612 
"num_base_bdevs_discovered": 2, 00:16:15.612 "num_base_bdevs_operational": 2, 00:16:15.612 "base_bdevs_list": [ 00:16:15.612 { 00:16:15.612 "name": null, 00:16:15.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.612 "is_configured": false, 00:16:15.612 "data_offset": 0, 00:16:15.612 "data_size": 63488 00:16:15.612 }, 00:16:15.612 { 00:16:15.612 "name": "BaseBdev2", 00:16:15.612 "uuid": "67a5c606-7842-50a4-bd5c-f582ea51274a", 00:16:15.612 "is_configured": true, 00:16:15.612 "data_offset": 2048, 00:16:15.612 "data_size": 63488 00:16:15.612 }, 00:16:15.612 { 00:16:15.612 "name": "BaseBdev3", 00:16:15.612 "uuid": "0b190bef-0ae5-5f55-a80b-c1896efcf29f", 00:16:15.612 "is_configured": true, 00:16:15.612 "data_offset": 2048, 00:16:15.612 "data_size": 63488 00:16:15.612 } 00:16:15.612 ] 00:16:15.612 }' 00:16:15.612 10:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.612 10:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:15.612 10:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.612 10:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:15.612 10:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:15.612 10:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.612 10:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.612 [2024-11-15 10:44:36.660252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:15.612 [2024-11-15 10:44:36.675063] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:16:15.612 10:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.612 10:44:36 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:15.612 [2024-11-15 10:44:36.682239] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:16.547 10:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:16.547 10:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.547 10:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:16.547 10:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:16.547 10:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.547 10:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.547 10:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.547 10:44:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.547 10:44:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.547 10:44:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.806 10:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.806 "name": "raid_bdev1", 00:16:16.806 "uuid": "d074ef6a-d3a4-444a-843a-2d79e5531ddb", 00:16:16.806 "strip_size_kb": 64, 00:16:16.806 "state": "online", 00:16:16.806 "raid_level": "raid5f", 00:16:16.806 "superblock": true, 00:16:16.806 "num_base_bdevs": 3, 00:16:16.806 "num_base_bdevs_discovered": 3, 00:16:16.806 "num_base_bdevs_operational": 3, 00:16:16.806 "process": { 00:16:16.806 "type": "rebuild", 00:16:16.806 "target": "spare", 00:16:16.806 "progress": { 00:16:16.806 "blocks": 18432, 00:16:16.806 "percent": 14 00:16:16.806 } 
00:16:16.806 }, 00:16:16.806 "base_bdevs_list": [ 00:16:16.806 { 00:16:16.806 "name": "spare", 00:16:16.806 "uuid": "d0abc887-2797-5e2c-850b-b37a980087c5", 00:16:16.806 "is_configured": true, 00:16:16.806 "data_offset": 2048, 00:16:16.806 "data_size": 63488 00:16:16.806 }, 00:16:16.806 { 00:16:16.806 "name": "BaseBdev2", 00:16:16.806 "uuid": "67a5c606-7842-50a4-bd5c-f582ea51274a", 00:16:16.806 "is_configured": true, 00:16:16.806 "data_offset": 2048, 00:16:16.806 "data_size": 63488 00:16:16.806 }, 00:16:16.806 { 00:16:16.806 "name": "BaseBdev3", 00:16:16.806 "uuid": "0b190bef-0ae5-5f55-a80b-c1896efcf29f", 00:16:16.806 "is_configured": true, 00:16:16.806 "data_offset": 2048, 00:16:16.806 "data_size": 63488 00:16:16.806 } 00:16:16.806 ] 00:16:16.806 }' 00:16:16.806 10:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.806 10:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:16.806 10:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.806 10:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:16.806 10:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:16.806 10:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:16.806 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:16.806 10:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:16:16.806 10:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:16.806 10:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=606 00:16:16.806 10:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:16.806 10:44:37 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:16.806 10:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.806 10:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:16.806 10:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:16.806 10:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.806 10:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.806 10:44:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.806 10:44:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.806 10:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.806 10:44:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.806 10:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.806 "name": "raid_bdev1", 00:16:16.806 "uuid": "d074ef6a-d3a4-444a-843a-2d79e5531ddb", 00:16:16.806 "strip_size_kb": 64, 00:16:16.806 "state": "online", 00:16:16.806 "raid_level": "raid5f", 00:16:16.806 "superblock": true, 00:16:16.806 "num_base_bdevs": 3, 00:16:16.806 "num_base_bdevs_discovered": 3, 00:16:16.806 "num_base_bdevs_operational": 3, 00:16:16.806 "process": { 00:16:16.806 "type": "rebuild", 00:16:16.806 "target": "spare", 00:16:16.806 "progress": { 00:16:16.806 "blocks": 22528, 00:16:16.806 "percent": 17 00:16:16.806 } 00:16:16.806 }, 00:16:16.806 "base_bdevs_list": [ 00:16:16.806 { 00:16:16.806 "name": "spare", 00:16:16.806 "uuid": "d0abc887-2797-5e2c-850b-b37a980087c5", 00:16:16.806 "is_configured": true, 00:16:16.806 "data_offset": 2048, 00:16:16.806 
"data_size": 63488 00:16:16.806 }, 00:16:16.806 { 00:16:16.806 "name": "BaseBdev2", 00:16:16.806 "uuid": "67a5c606-7842-50a4-bd5c-f582ea51274a", 00:16:16.806 "is_configured": true, 00:16:16.806 "data_offset": 2048, 00:16:16.806 "data_size": 63488 00:16:16.806 }, 00:16:16.806 { 00:16:16.806 "name": "BaseBdev3", 00:16:16.806 "uuid": "0b190bef-0ae5-5f55-a80b-c1896efcf29f", 00:16:16.806 "is_configured": true, 00:16:16.806 "data_offset": 2048, 00:16:16.806 "data_size": 63488 00:16:16.806 } 00:16:16.806 ] 00:16:16.806 }' 00:16:16.806 10:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.806 10:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:16.806 10:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.064 10:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:17.064 10:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:17.999 10:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:17.999 10:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:17.999 10:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:17.999 10:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:17.999 10:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:17.999 10:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:17.999 10:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.999 10:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:16:17.999 10:44:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.999 10:44:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.999 10:44:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.999 10:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:17.999 "name": "raid_bdev1", 00:16:17.999 "uuid": "d074ef6a-d3a4-444a-843a-2d79e5531ddb", 00:16:17.999 "strip_size_kb": 64, 00:16:17.999 "state": "online", 00:16:17.999 "raid_level": "raid5f", 00:16:17.999 "superblock": true, 00:16:17.999 "num_base_bdevs": 3, 00:16:17.999 "num_base_bdevs_discovered": 3, 00:16:17.999 "num_base_bdevs_operational": 3, 00:16:17.999 "process": { 00:16:17.999 "type": "rebuild", 00:16:17.999 "target": "spare", 00:16:17.999 "progress": { 00:16:17.999 "blocks": 45056, 00:16:17.999 "percent": 35 00:16:17.999 } 00:16:17.999 }, 00:16:17.999 "base_bdevs_list": [ 00:16:17.999 { 00:16:17.999 "name": "spare", 00:16:17.999 "uuid": "d0abc887-2797-5e2c-850b-b37a980087c5", 00:16:17.999 "is_configured": true, 00:16:17.999 "data_offset": 2048, 00:16:17.999 "data_size": 63488 00:16:17.999 }, 00:16:17.999 { 00:16:17.999 "name": "BaseBdev2", 00:16:17.999 "uuid": "67a5c606-7842-50a4-bd5c-f582ea51274a", 00:16:17.999 "is_configured": true, 00:16:17.999 "data_offset": 2048, 00:16:17.999 "data_size": 63488 00:16:17.999 }, 00:16:17.999 { 00:16:17.999 "name": "BaseBdev3", 00:16:17.999 "uuid": "0b190bef-0ae5-5f55-a80b-c1896efcf29f", 00:16:17.999 "is_configured": true, 00:16:17.999 "data_offset": 2048, 00:16:17.999 "data_size": 63488 00:16:17.999 } 00:16:17.999 ] 00:16:17.999 }' 00:16:17.999 10:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.999 10:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:17.999 10:44:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.999 10:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:17.999 10:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:19.375 10:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:19.375 10:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:19.375 10:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:19.375 10:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:19.375 10:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:19.375 10:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:19.375 10:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.375 10:44:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.375 10:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.375 10:44:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.375 10:44:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.375 10:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:19.375 "name": "raid_bdev1", 00:16:19.375 "uuid": "d074ef6a-d3a4-444a-843a-2d79e5531ddb", 00:16:19.375 "strip_size_kb": 64, 00:16:19.375 "state": "online", 00:16:19.375 "raid_level": "raid5f", 00:16:19.375 "superblock": true, 00:16:19.375 "num_base_bdevs": 3, 00:16:19.375 "num_base_bdevs_discovered": 3, 00:16:19.375 "num_base_bdevs_operational": 
3, 00:16:19.375 "process": { 00:16:19.375 "type": "rebuild", 00:16:19.375 "target": "spare", 00:16:19.375 "progress": { 00:16:19.375 "blocks": 69632, 00:16:19.375 "percent": 54 00:16:19.375 } 00:16:19.375 }, 00:16:19.375 "base_bdevs_list": [ 00:16:19.375 { 00:16:19.375 "name": "spare", 00:16:19.375 "uuid": "d0abc887-2797-5e2c-850b-b37a980087c5", 00:16:19.375 "is_configured": true, 00:16:19.375 "data_offset": 2048, 00:16:19.375 "data_size": 63488 00:16:19.375 }, 00:16:19.375 { 00:16:19.375 "name": "BaseBdev2", 00:16:19.375 "uuid": "67a5c606-7842-50a4-bd5c-f582ea51274a", 00:16:19.375 "is_configured": true, 00:16:19.375 "data_offset": 2048, 00:16:19.375 "data_size": 63488 00:16:19.375 }, 00:16:19.375 { 00:16:19.375 "name": "BaseBdev3", 00:16:19.375 "uuid": "0b190bef-0ae5-5f55-a80b-c1896efcf29f", 00:16:19.375 "is_configured": true, 00:16:19.375 "data_offset": 2048, 00:16:19.375 "data_size": 63488 00:16:19.375 } 00:16:19.375 ] 00:16:19.375 }' 00:16:19.375 10:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:19.375 10:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:19.375 10:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.375 10:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:19.375 10:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:20.309 10:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:20.309 10:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:20.309 10:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.309 10:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:20.309 
10:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:20.309 10:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.309 10:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.309 10:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.309 10:44:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.309 10:44:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.309 10:44:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.309 10:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.309 "name": "raid_bdev1", 00:16:20.310 "uuid": "d074ef6a-d3a4-444a-843a-2d79e5531ddb", 00:16:20.310 "strip_size_kb": 64, 00:16:20.310 "state": "online", 00:16:20.310 "raid_level": "raid5f", 00:16:20.310 "superblock": true, 00:16:20.310 "num_base_bdevs": 3, 00:16:20.310 "num_base_bdevs_discovered": 3, 00:16:20.310 "num_base_bdevs_operational": 3, 00:16:20.310 "process": { 00:16:20.310 "type": "rebuild", 00:16:20.310 "target": "spare", 00:16:20.310 "progress": { 00:16:20.310 "blocks": 92160, 00:16:20.310 "percent": 72 00:16:20.310 } 00:16:20.310 }, 00:16:20.310 "base_bdevs_list": [ 00:16:20.310 { 00:16:20.310 "name": "spare", 00:16:20.310 "uuid": "d0abc887-2797-5e2c-850b-b37a980087c5", 00:16:20.310 "is_configured": true, 00:16:20.310 "data_offset": 2048, 00:16:20.310 "data_size": 63488 00:16:20.310 }, 00:16:20.310 { 00:16:20.310 "name": "BaseBdev2", 00:16:20.310 "uuid": "67a5c606-7842-50a4-bd5c-f582ea51274a", 00:16:20.310 "is_configured": true, 00:16:20.310 "data_offset": 2048, 00:16:20.310 "data_size": 63488 00:16:20.310 }, 00:16:20.310 { 00:16:20.310 "name": "BaseBdev3", 00:16:20.310 "uuid": 
"0b190bef-0ae5-5f55-a80b-c1896efcf29f", 00:16:20.310 "is_configured": true, 00:16:20.310 "data_offset": 2048, 00:16:20.310 "data_size": 63488 00:16:20.310 } 00:16:20.310 ] 00:16:20.310 }' 00:16:20.310 10:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.310 10:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:20.310 10:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:20.570 10:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:20.570 10:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:21.506 10:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:21.506 10:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:21.506 10:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:21.506 10:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:21.506 10:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:21.506 10:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:21.506 10:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.506 10:44:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.506 10:44:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.506 10:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.506 10:44:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.506 
10:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.506 "name": "raid_bdev1", 00:16:21.506 "uuid": "d074ef6a-d3a4-444a-843a-2d79e5531ddb", 00:16:21.506 "strip_size_kb": 64, 00:16:21.506 "state": "online", 00:16:21.506 "raid_level": "raid5f", 00:16:21.506 "superblock": true, 00:16:21.506 "num_base_bdevs": 3, 00:16:21.506 "num_base_bdevs_discovered": 3, 00:16:21.506 "num_base_bdevs_operational": 3, 00:16:21.506 "process": { 00:16:21.506 "type": "rebuild", 00:16:21.506 "target": "spare", 00:16:21.506 "progress": { 00:16:21.506 "blocks": 116736, 00:16:21.506 "percent": 91 00:16:21.506 } 00:16:21.506 }, 00:16:21.506 "base_bdevs_list": [ 00:16:21.506 { 00:16:21.506 "name": "spare", 00:16:21.506 "uuid": "d0abc887-2797-5e2c-850b-b37a980087c5", 00:16:21.506 "is_configured": true, 00:16:21.506 "data_offset": 2048, 00:16:21.506 "data_size": 63488 00:16:21.506 }, 00:16:21.506 { 00:16:21.506 "name": "BaseBdev2", 00:16:21.506 "uuid": "67a5c606-7842-50a4-bd5c-f582ea51274a", 00:16:21.506 "is_configured": true, 00:16:21.506 "data_offset": 2048, 00:16:21.506 "data_size": 63488 00:16:21.506 }, 00:16:21.506 { 00:16:21.506 "name": "BaseBdev3", 00:16:21.506 "uuid": "0b190bef-0ae5-5f55-a80b-c1896efcf29f", 00:16:21.506 "is_configured": true, 00:16:21.506 "data_offset": 2048, 00:16:21.506 "data_size": 63488 00:16:21.506 } 00:16:21.506 ] 00:16:21.506 }' 00:16:21.506 10:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.506 10:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:21.507 10:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.507 10:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:21.507 10:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:22.073 [2024-11-15 10:44:42.948818] 
bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:22.073 [2024-11-15 10:44:42.948929] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:22.073 [2024-11-15 10:44:42.949066] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:22.641 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:22.641 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:22.641 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.641 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:22.641 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:22.641 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.641 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.641 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.641 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.641 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.641 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.641 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.641 "name": "raid_bdev1", 00:16:22.641 "uuid": "d074ef6a-d3a4-444a-843a-2d79e5531ddb", 00:16:22.641 "strip_size_kb": 64, 00:16:22.641 "state": "online", 00:16:22.641 "raid_level": "raid5f", 00:16:22.641 "superblock": true, 00:16:22.641 "num_base_bdevs": 3, 00:16:22.641 "num_base_bdevs_discovered": 3, 
00:16:22.641 "num_base_bdevs_operational": 3, 00:16:22.641 "base_bdevs_list": [ 00:16:22.641 { 00:16:22.641 "name": "spare", 00:16:22.641 "uuid": "d0abc887-2797-5e2c-850b-b37a980087c5", 00:16:22.641 "is_configured": true, 00:16:22.641 "data_offset": 2048, 00:16:22.641 "data_size": 63488 00:16:22.641 }, 00:16:22.641 { 00:16:22.641 "name": "BaseBdev2", 00:16:22.641 "uuid": "67a5c606-7842-50a4-bd5c-f582ea51274a", 00:16:22.641 "is_configured": true, 00:16:22.641 "data_offset": 2048, 00:16:22.641 "data_size": 63488 00:16:22.641 }, 00:16:22.641 { 00:16:22.641 "name": "BaseBdev3", 00:16:22.641 "uuid": "0b190bef-0ae5-5f55-a80b-c1896efcf29f", 00:16:22.641 "is_configured": true, 00:16:22.641 "data_offset": 2048, 00:16:22.641 "data_size": 63488 00:16:22.641 } 00:16:22.641 ] 00:16:22.641 }' 00:16:22.641 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.641 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:22.641 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:22.641 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:22.641 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:22.641 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:22.641 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.641 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:22.641 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:22.641 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.641 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:22.641 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.641 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.641 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.900 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.900 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.900 "name": "raid_bdev1", 00:16:22.900 "uuid": "d074ef6a-d3a4-444a-843a-2d79e5531ddb", 00:16:22.900 "strip_size_kb": 64, 00:16:22.900 "state": "online", 00:16:22.900 "raid_level": "raid5f", 00:16:22.900 "superblock": true, 00:16:22.900 "num_base_bdevs": 3, 00:16:22.900 "num_base_bdevs_discovered": 3, 00:16:22.900 "num_base_bdevs_operational": 3, 00:16:22.900 "base_bdevs_list": [ 00:16:22.900 { 00:16:22.900 "name": "spare", 00:16:22.900 "uuid": "d0abc887-2797-5e2c-850b-b37a980087c5", 00:16:22.900 "is_configured": true, 00:16:22.900 "data_offset": 2048, 00:16:22.900 "data_size": 63488 00:16:22.900 }, 00:16:22.900 { 00:16:22.900 "name": "BaseBdev2", 00:16:22.900 "uuid": "67a5c606-7842-50a4-bd5c-f582ea51274a", 00:16:22.900 "is_configured": true, 00:16:22.900 "data_offset": 2048, 00:16:22.900 "data_size": 63488 00:16:22.900 }, 00:16:22.900 { 00:16:22.900 "name": "BaseBdev3", 00:16:22.900 "uuid": "0b190bef-0ae5-5f55-a80b-c1896efcf29f", 00:16:22.900 "is_configured": true, 00:16:22.900 "data_offset": 2048, 00:16:22.900 "data_size": 63488 00:16:22.900 } 00:16:22.900 ] 00:16:22.900 }' 00:16:22.900 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.900 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:22.900 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:16:22.900 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:22.900 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:22.900 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:22.900 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:22.900 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.900 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.900 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:22.900 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.900 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.900 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.900 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.900 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.900 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.900 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.900 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.900 10:44:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.900 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.900 "name": "raid_bdev1", 00:16:22.900 "uuid": 
"d074ef6a-d3a4-444a-843a-2d79e5531ddb", 00:16:22.900 "strip_size_kb": 64, 00:16:22.900 "state": "online", 00:16:22.900 "raid_level": "raid5f", 00:16:22.900 "superblock": true, 00:16:22.900 "num_base_bdevs": 3, 00:16:22.900 "num_base_bdevs_discovered": 3, 00:16:22.900 "num_base_bdevs_operational": 3, 00:16:22.900 "base_bdevs_list": [ 00:16:22.900 { 00:16:22.900 "name": "spare", 00:16:22.900 "uuid": "d0abc887-2797-5e2c-850b-b37a980087c5", 00:16:22.900 "is_configured": true, 00:16:22.900 "data_offset": 2048, 00:16:22.900 "data_size": 63488 00:16:22.900 }, 00:16:22.900 { 00:16:22.900 "name": "BaseBdev2", 00:16:22.900 "uuid": "67a5c606-7842-50a4-bd5c-f582ea51274a", 00:16:22.900 "is_configured": true, 00:16:22.900 "data_offset": 2048, 00:16:22.900 "data_size": 63488 00:16:22.900 }, 00:16:22.900 { 00:16:22.900 "name": "BaseBdev3", 00:16:22.900 "uuid": "0b190bef-0ae5-5f55-a80b-c1896efcf29f", 00:16:22.900 "is_configured": true, 00:16:22.900 "data_offset": 2048, 00:16:22.900 "data_size": 63488 00:16:22.900 } 00:16:22.900 ] 00:16:22.900 }' 00:16:22.900 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.900 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.466 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:23.467 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.467 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.467 [2024-11-15 10:44:44.476370] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:23.467 [2024-11-15 10:44:44.476436] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:23.467 [2024-11-15 10:44:44.476560] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:23.467 [2024-11-15 10:44:44.476711] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:23.467 [2024-11-15 10:44:44.476738] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:23.467 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.467 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.467 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.467 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:23.467 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.467 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.467 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:23.467 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:23.467 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:23.467 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:23.467 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:23.467 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:23.467 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:23.467 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:23.467 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:23.467 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # 
local i 00:16:23.467 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:23.467 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:23.467 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:23.724 /dev/nbd0 00:16:23.724 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:23.724 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:23.724 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:23.724 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:23.724 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:23.724 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:23.724 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:23.724 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:23.724 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:23.724 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:23.724 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:23.724 1+0 records in 00:16:23.724 1+0 records out 00:16:23.724 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344178 s, 11.9 MB/s 00:16:23.724 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:23.724 10:44:44 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:23.724 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:23.982 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:23.982 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:23.982 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:23.982 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:23.982 10:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:24.240 /dev/nbd1 00:16:24.240 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:24.240 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:24.240 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:24.240 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:24.240 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:24.240 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:24.240 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:24.240 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:24.240 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:24.240 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:24.241 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:24.241 1+0 records in 00:16:24.241 1+0 records out 00:16:24.241 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260525 s, 15.7 MB/s 00:16:24.241 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:24.241 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:24.241 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:24.241 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:24.241 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:24.241 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:24.241 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:24.241 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:24.500 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:24.500 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:24.500 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:24.500 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:24.500 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:24.500 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:24.500 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:16:24.759 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:24.759 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:24.759 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:24.759 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:24.759 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:24.759 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:24.759 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:24.759 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:24.759 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:24.759 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:25.018 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:25.018 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:25.018 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:25.018 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:25.018 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:25.018 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:25.018 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:25.018 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:25.018 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:25.018 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:25.018 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.018 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.018 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.018 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:25.018 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.018 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.018 [2024-11-15 10:44:45.995429] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:25.018 [2024-11-15 10:44:45.995537] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.018 [2024-11-15 10:44:45.995567] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:25.018 [2024-11-15 10:44:45.995584] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.018 [2024-11-15 10:44:45.998604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.018 [2024-11-15 10:44:45.998655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:25.018 [2024-11-15 10:44:45.998771] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:25.018 [2024-11-15 10:44:45.998849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:25.018 [2024-11-15 10:44:45.999018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:25.018 [2024-11-15 10:44:45.999181] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:25.018 spare 00:16:25.019 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.019 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:25.019 10:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.019 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.019 [2024-11-15 10:44:46.099285] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:25.019 [2024-11-15 10:44:46.099320] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:25.019 [2024-11-15 10:44:46.099678] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:16:25.019 [2024-11-15 10:44:46.104605] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:25.019 [2024-11-15 10:44:46.104631] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:25.019 [2024-11-15 10:44:46.104883] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:25.019 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.019 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:25.019 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:25.019 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:25.019 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:25.019 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:16:25.019 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:25.019 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.019 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.019 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.019 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.019 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.019 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.019 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.019 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.019 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.019 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.019 "name": "raid_bdev1", 00:16:25.019 "uuid": "d074ef6a-d3a4-444a-843a-2d79e5531ddb", 00:16:25.019 "strip_size_kb": 64, 00:16:25.019 "state": "online", 00:16:25.019 "raid_level": "raid5f", 00:16:25.019 "superblock": true, 00:16:25.019 "num_base_bdevs": 3, 00:16:25.019 "num_base_bdevs_discovered": 3, 00:16:25.019 "num_base_bdevs_operational": 3, 00:16:25.019 "base_bdevs_list": [ 00:16:25.019 { 00:16:25.019 "name": "spare", 00:16:25.019 "uuid": "d0abc887-2797-5e2c-850b-b37a980087c5", 00:16:25.019 "is_configured": true, 00:16:25.019 "data_offset": 2048, 00:16:25.019 "data_size": 63488 00:16:25.019 }, 00:16:25.019 { 00:16:25.019 "name": "BaseBdev2", 00:16:25.019 "uuid": "67a5c606-7842-50a4-bd5c-f582ea51274a", 00:16:25.019 "is_configured": true, 00:16:25.019 "data_offset": 
2048, 00:16:25.019 "data_size": 63488 00:16:25.019 }, 00:16:25.019 { 00:16:25.019 "name": "BaseBdev3", 00:16:25.019 "uuid": "0b190bef-0ae5-5f55-a80b-c1896efcf29f", 00:16:25.019 "is_configured": true, 00:16:25.019 "data_offset": 2048, 00:16:25.019 "data_size": 63488 00:16:25.019 } 00:16:25.019 ] 00:16:25.019 }' 00:16:25.019 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.019 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.586 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:25.586 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.586 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:25.586 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:25.586 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.586 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.586 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.586 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.586 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.586 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.586 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.586 "name": "raid_bdev1", 00:16:25.586 "uuid": "d074ef6a-d3a4-444a-843a-2d79e5531ddb", 00:16:25.586 "strip_size_kb": 64, 00:16:25.586 "state": "online", 00:16:25.586 "raid_level": "raid5f", 00:16:25.586 "superblock": true, 00:16:25.586 
"num_base_bdevs": 3, 00:16:25.586 "num_base_bdevs_discovered": 3, 00:16:25.586 "num_base_bdevs_operational": 3, 00:16:25.586 "base_bdevs_list": [ 00:16:25.586 { 00:16:25.586 "name": "spare", 00:16:25.586 "uuid": "d0abc887-2797-5e2c-850b-b37a980087c5", 00:16:25.586 "is_configured": true, 00:16:25.586 "data_offset": 2048, 00:16:25.587 "data_size": 63488 00:16:25.587 }, 00:16:25.587 { 00:16:25.587 "name": "BaseBdev2", 00:16:25.587 "uuid": "67a5c606-7842-50a4-bd5c-f582ea51274a", 00:16:25.587 "is_configured": true, 00:16:25.587 "data_offset": 2048, 00:16:25.587 "data_size": 63488 00:16:25.587 }, 00:16:25.587 { 00:16:25.587 "name": "BaseBdev3", 00:16:25.587 "uuid": "0b190bef-0ae5-5f55-a80b-c1896efcf29f", 00:16:25.587 "is_configured": true, 00:16:25.587 "data_offset": 2048, 00:16:25.587 "data_size": 63488 00:16:25.587 } 00:16:25.587 ] 00:16:25.587 }' 00:16:25.587 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.587 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:25.587 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.845 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:25.845 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.845 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.845 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.845 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:25.845 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.845 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:25.845 10:44:46 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:25.845 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.845 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.845 [2024-11-15 10:44:46.830815] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:25.845 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.845 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:25.845 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:25.845 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:25.845 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:25.845 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.845 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:25.845 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.845 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.845 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.845 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.845 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.845 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.845 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:25.845 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.845 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.845 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.845 "name": "raid_bdev1", 00:16:25.845 "uuid": "d074ef6a-d3a4-444a-843a-2d79e5531ddb", 00:16:25.845 "strip_size_kb": 64, 00:16:25.845 "state": "online", 00:16:25.845 "raid_level": "raid5f", 00:16:25.845 "superblock": true, 00:16:25.845 "num_base_bdevs": 3, 00:16:25.845 "num_base_bdevs_discovered": 2, 00:16:25.845 "num_base_bdevs_operational": 2, 00:16:25.845 "base_bdevs_list": [ 00:16:25.845 { 00:16:25.846 "name": null, 00:16:25.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.846 "is_configured": false, 00:16:25.846 "data_offset": 0, 00:16:25.846 "data_size": 63488 00:16:25.846 }, 00:16:25.846 { 00:16:25.846 "name": "BaseBdev2", 00:16:25.846 "uuid": "67a5c606-7842-50a4-bd5c-f582ea51274a", 00:16:25.846 "is_configured": true, 00:16:25.846 "data_offset": 2048, 00:16:25.846 "data_size": 63488 00:16:25.846 }, 00:16:25.846 { 00:16:25.846 "name": "BaseBdev3", 00:16:25.846 "uuid": "0b190bef-0ae5-5f55-a80b-c1896efcf29f", 00:16:25.846 "is_configured": true, 00:16:25.846 "data_offset": 2048, 00:16:25.846 "data_size": 63488 00:16:25.846 } 00:16:25.846 ] 00:16:25.846 }' 00:16:25.846 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.846 10:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.413 10:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:26.413 10:44:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.413 10:44:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.413 [2024-11-15 10:44:47.323002] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:26.413 [2024-11-15 10:44:47.323224] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:26.413 [2024-11-15 10:44:47.323252] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:26.413 [2024-11-15 10:44:47.323302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:26.413 [2024-11-15 10:44:47.337637] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:16:26.413 10:44:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.413 10:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:26.413 [2024-11-15 10:44:47.344705] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:27.348 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:27.348 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:27.348 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:27.348 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:27.348 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.348 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.348 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.348 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.348 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:16:27.348 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.348 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:27.348 "name": "raid_bdev1", 00:16:27.348 "uuid": "d074ef6a-d3a4-444a-843a-2d79e5531ddb", 00:16:27.348 "strip_size_kb": 64, 00:16:27.348 "state": "online", 00:16:27.348 "raid_level": "raid5f", 00:16:27.348 "superblock": true, 00:16:27.348 "num_base_bdevs": 3, 00:16:27.348 "num_base_bdevs_discovered": 3, 00:16:27.348 "num_base_bdevs_operational": 3, 00:16:27.348 "process": { 00:16:27.348 "type": "rebuild", 00:16:27.348 "target": "spare", 00:16:27.348 "progress": { 00:16:27.348 "blocks": 18432, 00:16:27.348 "percent": 14 00:16:27.348 } 00:16:27.348 }, 00:16:27.348 "base_bdevs_list": [ 00:16:27.348 { 00:16:27.348 "name": "spare", 00:16:27.348 "uuid": "d0abc887-2797-5e2c-850b-b37a980087c5", 00:16:27.348 "is_configured": true, 00:16:27.348 "data_offset": 2048, 00:16:27.348 "data_size": 63488 00:16:27.348 }, 00:16:27.348 { 00:16:27.348 "name": "BaseBdev2", 00:16:27.348 "uuid": "67a5c606-7842-50a4-bd5c-f582ea51274a", 00:16:27.348 "is_configured": true, 00:16:27.348 "data_offset": 2048, 00:16:27.348 "data_size": 63488 00:16:27.348 }, 00:16:27.348 { 00:16:27.348 "name": "BaseBdev3", 00:16:27.348 "uuid": "0b190bef-0ae5-5f55-a80b-c1896efcf29f", 00:16:27.348 "is_configured": true, 00:16:27.348 "data_offset": 2048, 00:16:27.348 "data_size": 63488 00:16:27.348 } 00:16:27.348 ] 00:16:27.348 }' 00:16:27.348 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:27.348 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:27.348 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:27.348 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:16:27.348 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:27.348 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.348 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.606 [2024-11-15 10:44:48.510847] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:27.606 [2024-11-15 10:44:48.559072] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:27.606 [2024-11-15 10:44:48.559153] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.606 [2024-11-15 10:44:48.559178] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:27.606 [2024-11-15 10:44:48.559193] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:27.606 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.606 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:27.607 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:27.607 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.607 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:27.607 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:27.607 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:27.607 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.607 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.607 10:44:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.607 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.607 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.607 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.607 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.607 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.607 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.607 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.607 "name": "raid_bdev1", 00:16:27.607 "uuid": "d074ef6a-d3a4-444a-843a-2d79e5531ddb", 00:16:27.607 "strip_size_kb": 64, 00:16:27.607 "state": "online", 00:16:27.607 "raid_level": "raid5f", 00:16:27.607 "superblock": true, 00:16:27.607 "num_base_bdevs": 3, 00:16:27.607 "num_base_bdevs_discovered": 2, 00:16:27.607 "num_base_bdevs_operational": 2, 00:16:27.607 "base_bdevs_list": [ 00:16:27.607 { 00:16:27.607 "name": null, 00:16:27.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.607 "is_configured": false, 00:16:27.607 "data_offset": 0, 00:16:27.607 "data_size": 63488 00:16:27.607 }, 00:16:27.607 { 00:16:27.607 "name": "BaseBdev2", 00:16:27.607 "uuid": "67a5c606-7842-50a4-bd5c-f582ea51274a", 00:16:27.607 "is_configured": true, 00:16:27.607 "data_offset": 2048, 00:16:27.607 "data_size": 63488 00:16:27.607 }, 00:16:27.607 { 00:16:27.607 "name": "BaseBdev3", 00:16:27.607 "uuid": "0b190bef-0ae5-5f55-a80b-c1896efcf29f", 00:16:27.607 "is_configured": true, 00:16:27.607 "data_offset": 2048, 00:16:27.607 "data_size": 63488 00:16:27.607 } 00:16:27.607 ] 00:16:27.607 }' 00:16:27.607 10:44:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.607 10:44:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.172 10:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:28.172 10:44:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.172 10:44:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.172 [2024-11-15 10:44:49.114032] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:28.172 [2024-11-15 10:44:49.114123] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.172 [2024-11-15 10:44:49.114154] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:16:28.172 [2024-11-15 10:44:49.114176] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.172 [2024-11-15 10:44:49.114815] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.172 [2024-11-15 10:44:49.114860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:28.172 [2024-11-15 10:44:49.114986] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:28.172 [2024-11-15 10:44:49.115021] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:28.172 [2024-11-15 10:44:49.115036] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:28.172 [2024-11-15 10:44:49.115068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:28.172 [2024-11-15 10:44:49.130386] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:16:28.172 spare 00:16:28.172 10:44:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.172 10:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:28.172 [2024-11-15 10:44:49.137894] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:29.107 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:29.107 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.107 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:29.107 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:29.107 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.107 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.107 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.107 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.107 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.107 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.107 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.107 "name": "raid_bdev1", 00:16:29.107 "uuid": "d074ef6a-d3a4-444a-843a-2d79e5531ddb", 00:16:29.107 "strip_size_kb": 64, 00:16:29.107 "state": 
"online", 00:16:29.107 "raid_level": "raid5f", 00:16:29.107 "superblock": true, 00:16:29.107 "num_base_bdevs": 3, 00:16:29.107 "num_base_bdevs_discovered": 3, 00:16:29.107 "num_base_bdevs_operational": 3, 00:16:29.107 "process": { 00:16:29.107 "type": "rebuild", 00:16:29.107 "target": "spare", 00:16:29.107 "progress": { 00:16:29.107 "blocks": 18432, 00:16:29.107 "percent": 14 00:16:29.107 } 00:16:29.107 }, 00:16:29.107 "base_bdevs_list": [ 00:16:29.107 { 00:16:29.107 "name": "spare", 00:16:29.107 "uuid": "d0abc887-2797-5e2c-850b-b37a980087c5", 00:16:29.107 "is_configured": true, 00:16:29.107 "data_offset": 2048, 00:16:29.107 "data_size": 63488 00:16:29.107 }, 00:16:29.107 { 00:16:29.107 "name": "BaseBdev2", 00:16:29.107 "uuid": "67a5c606-7842-50a4-bd5c-f582ea51274a", 00:16:29.107 "is_configured": true, 00:16:29.107 "data_offset": 2048, 00:16:29.107 "data_size": 63488 00:16:29.107 }, 00:16:29.107 { 00:16:29.107 "name": "BaseBdev3", 00:16:29.107 "uuid": "0b190bef-0ae5-5f55-a80b-c1896efcf29f", 00:16:29.107 "is_configured": true, 00:16:29.107 "data_offset": 2048, 00:16:29.107 "data_size": 63488 00:16:29.107 } 00:16:29.107 ] 00:16:29.107 }' 00:16:29.107 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.107 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:29.107 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.366 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:29.366 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:29.366 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.366 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.366 [2024-11-15 10:44:50.308629] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:29.366 [2024-11-15 10:44:50.352909] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:29.366 [2024-11-15 10:44:50.352984] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:29.366 [2024-11-15 10:44:50.353013] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:29.366 [2024-11-15 10:44:50.353025] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:29.366 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.366 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:29.366 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:29.366 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:29.366 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.366 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.366 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:29.366 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.366 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.366 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.366 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.366 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.366 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.366 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.366 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.366 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.366 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.366 "name": "raid_bdev1", 00:16:29.366 "uuid": "d074ef6a-d3a4-444a-843a-2d79e5531ddb", 00:16:29.366 "strip_size_kb": 64, 00:16:29.366 "state": "online", 00:16:29.366 "raid_level": "raid5f", 00:16:29.366 "superblock": true, 00:16:29.366 "num_base_bdevs": 3, 00:16:29.366 "num_base_bdevs_discovered": 2, 00:16:29.366 "num_base_bdevs_operational": 2, 00:16:29.366 "base_bdevs_list": [ 00:16:29.366 { 00:16:29.366 "name": null, 00:16:29.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.366 "is_configured": false, 00:16:29.366 "data_offset": 0, 00:16:29.366 "data_size": 63488 00:16:29.366 }, 00:16:29.366 { 00:16:29.366 "name": "BaseBdev2", 00:16:29.366 "uuid": "67a5c606-7842-50a4-bd5c-f582ea51274a", 00:16:29.366 "is_configured": true, 00:16:29.366 "data_offset": 2048, 00:16:29.366 "data_size": 63488 00:16:29.366 }, 00:16:29.366 { 00:16:29.366 "name": "BaseBdev3", 00:16:29.366 "uuid": "0b190bef-0ae5-5f55-a80b-c1896efcf29f", 00:16:29.366 "is_configured": true, 00:16:29.366 "data_offset": 2048, 00:16:29.366 "data_size": 63488 00:16:29.366 } 00:16:29.366 ] 00:16:29.366 }' 00:16:29.366 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.366 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.933 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:29.933 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:16:29.933 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:29.933 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:29.933 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.933 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.933 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.933 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.933 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.933 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.933 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.933 "name": "raid_bdev1", 00:16:29.933 "uuid": "d074ef6a-d3a4-444a-843a-2d79e5531ddb", 00:16:29.933 "strip_size_kb": 64, 00:16:29.933 "state": "online", 00:16:29.933 "raid_level": "raid5f", 00:16:29.933 "superblock": true, 00:16:29.933 "num_base_bdevs": 3, 00:16:29.933 "num_base_bdevs_discovered": 2, 00:16:29.933 "num_base_bdevs_operational": 2, 00:16:29.933 "base_bdevs_list": [ 00:16:29.933 { 00:16:29.933 "name": null, 00:16:29.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.933 "is_configured": false, 00:16:29.933 "data_offset": 0, 00:16:29.933 "data_size": 63488 00:16:29.933 }, 00:16:29.933 { 00:16:29.933 "name": "BaseBdev2", 00:16:29.933 "uuid": "67a5c606-7842-50a4-bd5c-f582ea51274a", 00:16:29.933 "is_configured": true, 00:16:29.933 "data_offset": 2048, 00:16:29.933 "data_size": 63488 00:16:29.933 }, 00:16:29.933 { 00:16:29.933 "name": "BaseBdev3", 00:16:29.933 "uuid": "0b190bef-0ae5-5f55-a80b-c1896efcf29f", 00:16:29.933 "is_configured": true, 
00:16:29.933 "data_offset": 2048, 00:16:29.933 "data_size": 63488 00:16:29.933 } 00:16:29.933 ] 00:16:29.933 }' 00:16:29.933 10:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.933 10:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:29.933 10:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:30.192 10:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:30.192 10:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:30.192 10:44:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.192 10:44:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.192 10:44:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.192 10:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:30.192 10:44:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.192 10:44:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.192 [2024-11-15 10:44:51.112519] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:30.192 [2024-11-15 10:44:51.112722] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.192 [2024-11-15 10:44:51.112780] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:30.192 [2024-11-15 10:44:51.112805] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.192 [2024-11-15 10:44:51.113357] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.192 [2024-11-15 
10:44:51.113393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:30.192 [2024-11-15 10:44:51.113523] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:30.192 [2024-11-15 10:44:51.113546] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:30.192 [2024-11-15 10:44:51.113572] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:30.192 [2024-11-15 10:44:51.113588] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:30.192 BaseBdev1 00:16:30.192 10:44:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.192 10:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:31.130 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:31.130 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:31.130 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.130 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.130 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.130 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:31.130 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.130 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.130 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.130 10:44:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.130 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.130 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.130 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.130 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.130 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.130 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.130 "name": "raid_bdev1", 00:16:31.130 "uuid": "d074ef6a-d3a4-444a-843a-2d79e5531ddb", 00:16:31.130 "strip_size_kb": 64, 00:16:31.130 "state": "online", 00:16:31.130 "raid_level": "raid5f", 00:16:31.130 "superblock": true, 00:16:31.130 "num_base_bdevs": 3, 00:16:31.130 "num_base_bdevs_discovered": 2, 00:16:31.130 "num_base_bdevs_operational": 2, 00:16:31.130 "base_bdevs_list": [ 00:16:31.130 { 00:16:31.130 "name": null, 00:16:31.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.130 "is_configured": false, 00:16:31.130 "data_offset": 0, 00:16:31.130 "data_size": 63488 00:16:31.130 }, 00:16:31.130 { 00:16:31.130 "name": "BaseBdev2", 00:16:31.130 "uuid": "67a5c606-7842-50a4-bd5c-f582ea51274a", 00:16:31.130 "is_configured": true, 00:16:31.130 "data_offset": 2048, 00:16:31.130 "data_size": 63488 00:16:31.130 }, 00:16:31.130 { 00:16:31.130 "name": "BaseBdev3", 00:16:31.130 "uuid": "0b190bef-0ae5-5f55-a80b-c1896efcf29f", 00:16:31.130 "is_configured": true, 00:16:31.130 "data_offset": 2048, 00:16:31.130 "data_size": 63488 00:16:31.130 } 00:16:31.130 ] 00:16:31.130 }' 00:16:31.130 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.130 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:31.698 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:31.698 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.698 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:31.698 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:31.698 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.698 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.698 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.698 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.698 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.698 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.698 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.698 "name": "raid_bdev1", 00:16:31.698 "uuid": "d074ef6a-d3a4-444a-843a-2d79e5531ddb", 00:16:31.698 "strip_size_kb": 64, 00:16:31.698 "state": "online", 00:16:31.698 "raid_level": "raid5f", 00:16:31.698 "superblock": true, 00:16:31.698 "num_base_bdevs": 3, 00:16:31.698 "num_base_bdevs_discovered": 2, 00:16:31.698 "num_base_bdevs_operational": 2, 00:16:31.698 "base_bdevs_list": [ 00:16:31.698 { 00:16:31.698 "name": null, 00:16:31.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.698 "is_configured": false, 00:16:31.698 "data_offset": 0, 00:16:31.698 "data_size": 63488 00:16:31.698 }, 00:16:31.698 { 00:16:31.698 "name": "BaseBdev2", 00:16:31.698 "uuid": "67a5c606-7842-50a4-bd5c-f582ea51274a", 
00:16:31.698 "is_configured": true, 00:16:31.698 "data_offset": 2048, 00:16:31.698 "data_size": 63488 00:16:31.698 }, 00:16:31.698 { 00:16:31.698 "name": "BaseBdev3", 00:16:31.698 "uuid": "0b190bef-0ae5-5f55-a80b-c1896efcf29f", 00:16:31.698 "is_configured": true, 00:16:31.698 "data_offset": 2048, 00:16:31.698 "data_size": 63488 00:16:31.698 } 00:16:31.698 ] 00:16:31.698 }' 00:16:31.698 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.698 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:31.698 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.698 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:31.698 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:31.698 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:16:31.698 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:31.698 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:31.698 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:31.698 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:31.698 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:31.698 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:31.698 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.698 10:44:52 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.698 [2024-11-15 10:44:52.813241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:31.698 [2024-11-15 10:44:52.813443] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:31.698 [2024-11-15 10:44:52.813465] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:31.698 request: 00:16:31.698 { 00:16:31.698 "base_bdev": "BaseBdev1", 00:16:31.698 "raid_bdev": "raid_bdev1", 00:16:31.698 "method": "bdev_raid_add_base_bdev", 00:16:31.698 "req_id": 1 00:16:31.698 } 00:16:31.698 Got JSON-RPC error response 00:16:31.698 response: 00:16:31.698 { 00:16:31.698 "code": -22, 00:16:31.698 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:31.698 } 00:16:31.698 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:31.698 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:16:31.698 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:31.698 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:31.698 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:31.698 10:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:33.077 10:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:33.077 10:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:33.077 10:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:33.078 10:44:53 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.078 10:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.078 10:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:33.078 10:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.078 10:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.078 10:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.078 10:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.078 10:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.078 10:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.078 10:44:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.078 10:44:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.078 10:44:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.078 10:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.078 "name": "raid_bdev1", 00:16:33.078 "uuid": "d074ef6a-d3a4-444a-843a-2d79e5531ddb", 00:16:33.078 "strip_size_kb": 64, 00:16:33.078 "state": "online", 00:16:33.078 "raid_level": "raid5f", 00:16:33.078 "superblock": true, 00:16:33.078 "num_base_bdevs": 3, 00:16:33.078 "num_base_bdevs_discovered": 2, 00:16:33.078 "num_base_bdevs_operational": 2, 00:16:33.078 "base_bdevs_list": [ 00:16:33.078 { 00:16:33.078 "name": null, 00:16:33.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.078 "is_configured": false, 00:16:33.078 "data_offset": 0, 00:16:33.078 "data_size": 63488 00:16:33.078 }, 00:16:33.078 { 00:16:33.078 
"name": "BaseBdev2", 00:16:33.078 "uuid": "67a5c606-7842-50a4-bd5c-f582ea51274a", 00:16:33.078 "is_configured": true, 00:16:33.078 "data_offset": 2048, 00:16:33.078 "data_size": 63488 00:16:33.078 }, 00:16:33.078 { 00:16:33.078 "name": "BaseBdev3", 00:16:33.078 "uuid": "0b190bef-0ae5-5f55-a80b-c1896efcf29f", 00:16:33.078 "is_configured": true, 00:16:33.078 "data_offset": 2048, 00:16:33.078 "data_size": 63488 00:16:33.078 } 00:16:33.078 ] 00:16:33.078 }' 00:16:33.078 10:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.078 10:44:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.337 10:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:33.337 10:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.337 10:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:33.337 10:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:33.337 10:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.337 10:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.337 10:44:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.337 10:44:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.338 10:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.338 10:44:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.338 10:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.338 "name": "raid_bdev1", 00:16:33.338 "uuid": "d074ef6a-d3a4-444a-843a-2d79e5531ddb", 00:16:33.338 
"strip_size_kb": 64, 00:16:33.338 "state": "online", 00:16:33.338 "raid_level": "raid5f", 00:16:33.338 "superblock": true, 00:16:33.338 "num_base_bdevs": 3, 00:16:33.338 "num_base_bdevs_discovered": 2, 00:16:33.338 "num_base_bdevs_operational": 2, 00:16:33.338 "base_bdevs_list": [ 00:16:33.338 { 00:16:33.338 "name": null, 00:16:33.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.338 "is_configured": false, 00:16:33.338 "data_offset": 0, 00:16:33.338 "data_size": 63488 00:16:33.338 }, 00:16:33.338 { 00:16:33.338 "name": "BaseBdev2", 00:16:33.338 "uuid": "67a5c606-7842-50a4-bd5c-f582ea51274a", 00:16:33.338 "is_configured": true, 00:16:33.338 "data_offset": 2048, 00:16:33.338 "data_size": 63488 00:16:33.338 }, 00:16:33.338 { 00:16:33.338 "name": "BaseBdev3", 00:16:33.338 "uuid": "0b190bef-0ae5-5f55-a80b-c1896efcf29f", 00:16:33.338 "is_configured": true, 00:16:33.338 "data_offset": 2048, 00:16:33.338 "data_size": 63488 00:16:33.338 } 00:16:33.338 ] 00:16:33.338 }' 00:16:33.338 10:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.338 10:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:33.338 10:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.338 10:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:33.338 10:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82355 00:16:33.338 10:44:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82355 ']' 00:16:33.338 10:44:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82355 00:16:33.338 10:44:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:33.338 10:44:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:33.338 10:44:54 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82355 00:16:33.597 killing process with pid 82355 00:16:33.597 Received shutdown signal, test time was about 60.000000 seconds 00:16:33.597 00:16:33.597 Latency(us) 00:16:33.597 [2024-11-15T10:44:54.759Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:33.597 [2024-11-15T10:44:54.759Z] =================================================================================================================== 00:16:33.597 [2024-11-15T10:44:54.759Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:33.597 10:44:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:33.597 10:44:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:33.597 10:44:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82355' 00:16:33.597 10:44:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82355 00:16:33.597 [2024-11-15 10:44:54.505389] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:33.597 10:44:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82355 00:16:33.597 [2024-11-15 10:44:54.505561] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:33.597 [2024-11-15 10:44:54.505664] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:33.597 [2024-11-15 10:44:54.505693] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:33.855 [2024-11-15 10:44:54.850786] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:34.791 10:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:34.791 00:16:34.791 real 0m24.628s 00:16:34.791 user 0m32.808s 
00:16:34.791 sys 0m2.453s 00:16:34.791 10:44:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:34.791 10:44:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.791 ************************************ 00:16:34.791 END TEST raid5f_rebuild_test_sb 00:16:34.791 ************************************ 00:16:34.791 10:44:55 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:16:34.791 10:44:55 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:16:34.791 10:44:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:34.791 10:44:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:34.791 10:44:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:34.791 ************************************ 00:16:34.791 START TEST raid5f_state_function_test 00:16:34.791 ************************************ 00:16:34.791 10:44:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:16:34.791 10:44:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:34.791 10:44:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:34.791 10:44:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:34.791 10:44:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:34.791 10:44:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:34.791 10:44:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:34.791 10:44:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:34.791 10:44:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:16:34.791 10:44:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:34.791 10:44:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:34.791 10:44:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:34.791 10:44:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:34.791 10:44:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:34.791 10:44:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:34.791 10:44:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:34.791 10:44:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:34.791 10:44:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:34.791 10:44:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:34.791 10:44:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:34.791 10:44:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:34.791 10:44:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:34.791 10:44:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:34.791 10:44:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:34.791 10:44:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:34.791 10:44:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:34.791 10:44:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:16:34.791 10:44:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:34.791 10:44:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:34.791 10:44:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:34.791 Process raid pid: 83114 00:16:34.791 10:44:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83114 00:16:34.791 10:44:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:34.791 10:44:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83114' 00:16:34.791 10:44:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83114 00:16:34.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.791 10:44:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 83114 ']' 00:16:34.791 10:44:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.791 10:44:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:34.791 10:44:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.791 10:44:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:34.791 10:44:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.050 [2024-11-15 10:44:56.047607] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:16:35.050 [2024-11-15 10:44:56.047822] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:35.309 [2024-11-15 10:44:56.234704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.309 [2024-11-15 10:44:56.368602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.568 [2024-11-15 10:44:56.577617] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:35.568 [2024-11-15 10:44:56.577661] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:36.137 10:44:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:36.137 10:44:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:16:36.137 10:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:36.137 10:44:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.137 10:44:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.137 [2024-11-15 10:44:57.044312] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:36.137 [2024-11-15 10:44:57.044402] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:36.137 [2024-11-15 10:44:57.044420] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:36.137 [2024-11-15 10:44:57.044437] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:36.137 [2024-11-15 10:44:57.044447] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:16:36.137 [2024-11-15 10:44:57.044461] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:36.137 [2024-11-15 10:44:57.044471] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:36.137 [2024-11-15 10:44:57.044484] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:36.137 10:44:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.137 10:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:36.137 10:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:36.137 10:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:36.137 10:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:36.137 10:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:36.137 10:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:36.137 10:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.137 10:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.137 10:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.137 10:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.137 10:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.137 10:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.137 10:44:57 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.137 10:44:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.137 10:44:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.137 10:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.137 "name": "Existed_Raid", 00:16:36.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.137 "strip_size_kb": 64, 00:16:36.137 "state": "configuring", 00:16:36.137 "raid_level": "raid5f", 00:16:36.137 "superblock": false, 00:16:36.137 "num_base_bdevs": 4, 00:16:36.137 "num_base_bdevs_discovered": 0, 00:16:36.137 "num_base_bdevs_operational": 4, 00:16:36.137 "base_bdevs_list": [ 00:16:36.137 { 00:16:36.137 "name": "BaseBdev1", 00:16:36.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.137 "is_configured": false, 00:16:36.137 "data_offset": 0, 00:16:36.137 "data_size": 0 00:16:36.137 }, 00:16:36.137 { 00:16:36.137 "name": "BaseBdev2", 00:16:36.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.137 "is_configured": false, 00:16:36.137 "data_offset": 0, 00:16:36.137 "data_size": 0 00:16:36.137 }, 00:16:36.137 { 00:16:36.137 "name": "BaseBdev3", 00:16:36.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.137 "is_configured": false, 00:16:36.137 "data_offset": 0, 00:16:36.137 "data_size": 0 00:16:36.137 }, 00:16:36.137 { 00:16:36.137 "name": "BaseBdev4", 00:16:36.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.137 "is_configured": false, 00:16:36.137 "data_offset": 0, 00:16:36.137 "data_size": 0 00:16:36.137 } 00:16:36.137 ] 00:16:36.137 }' 00:16:36.137 10:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.137 10:44:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.395 10:44:57 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:36.396 10:44:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.396 10:44:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.654 [2024-11-15 10:44:57.556431] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:36.654 [2024-11-15 10:44:57.556641] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:36.654 10:44:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.654 10:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:36.654 10:44:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.654 10:44:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.654 [2024-11-15 10:44:57.564395] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:36.654 [2024-11-15 10:44:57.564601] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:36.654 [2024-11-15 10:44:57.564739] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:36.654 [2024-11-15 10:44:57.564800] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:36.654 [2024-11-15 10:44:57.564997] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:36.654 [2024-11-15 10:44:57.565057] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:36.654 [2024-11-15 10:44:57.565262] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:16:36.654 [2024-11-15 10:44:57.565295] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:36.654 10:44:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.654 10:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:36.654 10:44:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.654 10:44:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.654 [2024-11-15 10:44:57.608455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:36.654 BaseBdev1 00:16:36.654 10:44:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.654 10:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:36.654 10:44:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:36.654 10:44:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:36.654 10:44:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:36.654 10:44:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:36.654 10:44:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:36.654 10:44:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:36.654 10:44:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.654 10:44:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.654 10:44:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.655 
10:44:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:36.655 10:44:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.655 10:44:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.655 [ 00:16:36.655 { 00:16:36.655 "name": "BaseBdev1", 00:16:36.655 "aliases": [ 00:16:36.655 "db027323-3dc3-4cc0-992b-9c405d82233e" 00:16:36.655 ], 00:16:36.655 "product_name": "Malloc disk", 00:16:36.655 "block_size": 512, 00:16:36.655 "num_blocks": 65536, 00:16:36.655 "uuid": "db027323-3dc3-4cc0-992b-9c405d82233e", 00:16:36.655 "assigned_rate_limits": { 00:16:36.655 "rw_ios_per_sec": 0, 00:16:36.655 "rw_mbytes_per_sec": 0, 00:16:36.655 "r_mbytes_per_sec": 0, 00:16:36.655 "w_mbytes_per_sec": 0 00:16:36.655 }, 00:16:36.655 "claimed": true, 00:16:36.655 "claim_type": "exclusive_write", 00:16:36.655 "zoned": false, 00:16:36.655 "supported_io_types": { 00:16:36.655 "read": true, 00:16:36.655 "write": true, 00:16:36.655 "unmap": true, 00:16:36.655 "flush": true, 00:16:36.655 "reset": true, 00:16:36.655 "nvme_admin": false, 00:16:36.655 "nvme_io": false, 00:16:36.655 "nvme_io_md": false, 00:16:36.655 "write_zeroes": true, 00:16:36.655 "zcopy": true, 00:16:36.655 "get_zone_info": false, 00:16:36.655 "zone_management": false, 00:16:36.655 "zone_append": false, 00:16:36.655 "compare": false, 00:16:36.655 "compare_and_write": false, 00:16:36.655 "abort": true, 00:16:36.655 "seek_hole": false, 00:16:36.655 "seek_data": false, 00:16:36.655 "copy": true, 00:16:36.655 "nvme_iov_md": false 00:16:36.655 }, 00:16:36.655 "memory_domains": [ 00:16:36.655 { 00:16:36.655 "dma_device_id": "system", 00:16:36.655 "dma_device_type": 1 00:16:36.655 }, 00:16:36.655 { 00:16:36.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.655 "dma_device_type": 2 00:16:36.655 } 00:16:36.655 ], 00:16:36.655 "driver_specific": {} 00:16:36.655 } 
00:16:36.655 ] 00:16:36.655 10:44:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.655 10:44:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:36.655 10:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:36.655 10:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:36.655 10:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:36.655 10:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:36.655 10:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:36.655 10:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:36.655 10:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.655 10:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.655 10:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.655 10:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.655 10:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.655 10:44:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.655 10:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.655 10:44:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.655 10:44:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:36.655 10:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.655 "name": "Existed_Raid", 00:16:36.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.655 "strip_size_kb": 64, 00:16:36.655 "state": "configuring", 00:16:36.655 "raid_level": "raid5f", 00:16:36.655 "superblock": false, 00:16:36.655 "num_base_bdevs": 4, 00:16:36.655 "num_base_bdevs_discovered": 1, 00:16:36.655 "num_base_bdevs_operational": 4, 00:16:36.655 "base_bdevs_list": [ 00:16:36.655 { 00:16:36.655 "name": "BaseBdev1", 00:16:36.655 "uuid": "db027323-3dc3-4cc0-992b-9c405d82233e", 00:16:36.655 "is_configured": true, 00:16:36.655 "data_offset": 0, 00:16:36.655 "data_size": 65536 00:16:36.655 }, 00:16:36.655 { 00:16:36.655 "name": "BaseBdev2", 00:16:36.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.655 "is_configured": false, 00:16:36.655 "data_offset": 0, 00:16:36.655 "data_size": 0 00:16:36.655 }, 00:16:36.655 { 00:16:36.655 "name": "BaseBdev3", 00:16:36.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.655 "is_configured": false, 00:16:36.655 "data_offset": 0, 00:16:36.655 "data_size": 0 00:16:36.655 }, 00:16:36.655 { 00:16:36.655 "name": "BaseBdev4", 00:16:36.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.655 "is_configured": false, 00:16:36.655 "data_offset": 0, 00:16:36.655 "data_size": 0 00:16:36.655 } 00:16:36.655 ] 00:16:36.655 }' 00:16:36.655 10:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.655 10:44:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.222 10:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:37.223 10:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.223 10:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.223 
[2024-11-15 10:44:58.152734] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:37.223 [2024-11-15 10:44:58.152797] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:37.223 10:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.223 10:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:37.223 10:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.223 10:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.223 [2024-11-15 10:44:58.160764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:37.223 [2024-11-15 10:44:58.163181] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:37.223 [2024-11-15 10:44:58.163236] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:37.223 [2024-11-15 10:44:58.163254] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:37.223 [2024-11-15 10:44:58.163272] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:37.223 [2024-11-15 10:44:58.163282] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:37.223 [2024-11-15 10:44:58.163296] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:37.223 10:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.223 10:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:37.223 10:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:16:37.223 10:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:37.223 10:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:37.223 10:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:37.223 10:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:37.223 10:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.223 10:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:37.223 10:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.223 10:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.223 10:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.223 10:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.223 10:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.223 10:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.223 10:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.223 10:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.223 10:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.223 10:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.223 "name": "Existed_Raid", 00:16:37.223 "uuid": "00000000-0000-0000-0000-000000000000", 
00:16:37.223 "strip_size_kb": 64, 00:16:37.223 "state": "configuring", 00:16:37.223 "raid_level": "raid5f", 00:16:37.223 "superblock": false, 00:16:37.223 "num_base_bdevs": 4, 00:16:37.223 "num_base_bdevs_discovered": 1, 00:16:37.223 "num_base_bdevs_operational": 4, 00:16:37.223 "base_bdevs_list": [ 00:16:37.223 { 00:16:37.223 "name": "BaseBdev1", 00:16:37.223 "uuid": "db027323-3dc3-4cc0-992b-9c405d82233e", 00:16:37.223 "is_configured": true, 00:16:37.223 "data_offset": 0, 00:16:37.223 "data_size": 65536 00:16:37.223 }, 00:16:37.223 { 00:16:37.223 "name": "BaseBdev2", 00:16:37.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.223 "is_configured": false, 00:16:37.223 "data_offset": 0, 00:16:37.223 "data_size": 0 00:16:37.223 }, 00:16:37.223 { 00:16:37.223 "name": "BaseBdev3", 00:16:37.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.223 "is_configured": false, 00:16:37.223 "data_offset": 0, 00:16:37.223 "data_size": 0 00:16:37.223 }, 00:16:37.223 { 00:16:37.223 "name": "BaseBdev4", 00:16:37.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.223 "is_configured": false, 00:16:37.223 "data_offset": 0, 00:16:37.223 "data_size": 0 00:16:37.223 } 00:16:37.223 ] 00:16:37.223 }' 00:16:37.223 10:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.223 10:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.791 10:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:37.791 10:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.791 10:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.791 [2024-11-15 10:44:58.728340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:37.791 BaseBdev2 00:16:37.791 10:44:58 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.791 10:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:37.791 10:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:37.791 10:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:37.791 10:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:37.791 10:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:37.791 10:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:37.791 10:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:37.791 10:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.791 10:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.791 10:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.791 10:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:37.791 10:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.791 10:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.791 [ 00:16:37.791 { 00:16:37.791 "name": "BaseBdev2", 00:16:37.791 "aliases": [ 00:16:37.791 "28d66c68-4f31-4301-881f-a39660f8cedf" 00:16:37.791 ], 00:16:37.791 "product_name": "Malloc disk", 00:16:37.791 "block_size": 512, 00:16:37.791 "num_blocks": 65536, 00:16:37.791 "uuid": "28d66c68-4f31-4301-881f-a39660f8cedf", 00:16:37.791 "assigned_rate_limits": { 00:16:37.791 "rw_ios_per_sec": 0, 00:16:37.791 "rw_mbytes_per_sec": 0, 00:16:37.791 
"r_mbytes_per_sec": 0, 00:16:37.791 "w_mbytes_per_sec": 0 00:16:37.791 }, 00:16:37.791 "claimed": true, 00:16:37.791 "claim_type": "exclusive_write", 00:16:37.791 "zoned": false, 00:16:37.791 "supported_io_types": { 00:16:37.791 "read": true, 00:16:37.791 "write": true, 00:16:37.791 "unmap": true, 00:16:37.791 "flush": true, 00:16:37.791 "reset": true, 00:16:37.791 "nvme_admin": false, 00:16:37.791 "nvme_io": false, 00:16:37.791 "nvme_io_md": false, 00:16:37.791 "write_zeroes": true, 00:16:37.791 "zcopy": true, 00:16:37.791 "get_zone_info": false, 00:16:37.791 "zone_management": false, 00:16:37.791 "zone_append": false, 00:16:37.791 "compare": false, 00:16:37.791 "compare_and_write": false, 00:16:37.791 "abort": true, 00:16:37.791 "seek_hole": false, 00:16:37.791 "seek_data": false, 00:16:37.791 "copy": true, 00:16:37.791 "nvme_iov_md": false 00:16:37.791 }, 00:16:37.791 "memory_domains": [ 00:16:37.791 { 00:16:37.791 "dma_device_id": "system", 00:16:37.791 "dma_device_type": 1 00:16:37.791 }, 00:16:37.791 { 00:16:37.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.791 "dma_device_type": 2 00:16:37.791 } 00:16:37.791 ], 00:16:37.791 "driver_specific": {} 00:16:37.791 } 00:16:37.791 ] 00:16:37.791 10:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.791 10:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:37.791 10:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:37.791 10:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:37.791 10:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:37.791 10:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:37.791 10:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:16:37.791 10:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:37.792 10:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.792 10:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:37.792 10:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.792 10:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.792 10:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.792 10:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.792 10:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.792 10:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.792 10:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.792 10:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.792 10:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.792 10:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.792 "name": "Existed_Raid", 00:16:37.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.792 "strip_size_kb": 64, 00:16:37.792 "state": "configuring", 00:16:37.792 "raid_level": "raid5f", 00:16:37.792 "superblock": false, 00:16:37.792 "num_base_bdevs": 4, 00:16:37.792 "num_base_bdevs_discovered": 2, 00:16:37.792 "num_base_bdevs_operational": 4, 00:16:37.792 "base_bdevs_list": [ 00:16:37.792 { 00:16:37.792 "name": "BaseBdev1", 00:16:37.792 "uuid": 
"db027323-3dc3-4cc0-992b-9c405d82233e", 00:16:37.792 "is_configured": true, 00:16:37.792 "data_offset": 0, 00:16:37.792 "data_size": 65536 00:16:37.792 }, 00:16:37.792 { 00:16:37.792 "name": "BaseBdev2", 00:16:37.792 "uuid": "28d66c68-4f31-4301-881f-a39660f8cedf", 00:16:37.792 "is_configured": true, 00:16:37.792 "data_offset": 0, 00:16:37.792 "data_size": 65536 00:16:37.792 }, 00:16:37.792 { 00:16:37.792 "name": "BaseBdev3", 00:16:37.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.792 "is_configured": false, 00:16:37.792 "data_offset": 0, 00:16:37.792 "data_size": 0 00:16:37.792 }, 00:16:37.792 { 00:16:37.792 "name": "BaseBdev4", 00:16:37.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.792 "is_configured": false, 00:16:37.792 "data_offset": 0, 00:16:37.792 "data_size": 0 00:16:37.792 } 00:16:37.792 ] 00:16:37.792 }' 00:16:37.792 10:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.792 10:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.360 10:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:38.360 10:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.360 10:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.360 [2024-11-15 10:44:59.318111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:38.360 BaseBdev3 00:16:38.360 10:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.360 10:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:38.360 10:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:38.360 10:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:16:38.360 10:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:38.360 10:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:38.360 10:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:38.360 10:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:38.360 10:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.360 10:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.360 10:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.360 10:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:38.360 10:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.360 10:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.360 [ 00:16:38.360 { 00:16:38.360 "name": "BaseBdev3", 00:16:38.360 "aliases": [ 00:16:38.360 "a73be94d-813a-4aa7-a583-88b436e809c3" 00:16:38.360 ], 00:16:38.360 "product_name": "Malloc disk", 00:16:38.360 "block_size": 512, 00:16:38.360 "num_blocks": 65536, 00:16:38.360 "uuid": "a73be94d-813a-4aa7-a583-88b436e809c3", 00:16:38.360 "assigned_rate_limits": { 00:16:38.360 "rw_ios_per_sec": 0, 00:16:38.360 "rw_mbytes_per_sec": 0, 00:16:38.360 "r_mbytes_per_sec": 0, 00:16:38.360 "w_mbytes_per_sec": 0 00:16:38.360 }, 00:16:38.360 "claimed": true, 00:16:38.360 "claim_type": "exclusive_write", 00:16:38.360 "zoned": false, 00:16:38.360 "supported_io_types": { 00:16:38.360 "read": true, 00:16:38.360 "write": true, 00:16:38.360 "unmap": true, 00:16:38.360 "flush": true, 00:16:38.360 "reset": true, 00:16:38.360 "nvme_admin": false, 
00:16:38.360 "nvme_io": false, 00:16:38.360 "nvme_io_md": false, 00:16:38.360 "write_zeroes": true, 00:16:38.360 "zcopy": true, 00:16:38.360 "get_zone_info": false, 00:16:38.360 "zone_management": false, 00:16:38.360 "zone_append": false, 00:16:38.360 "compare": false, 00:16:38.360 "compare_and_write": false, 00:16:38.360 "abort": true, 00:16:38.360 "seek_hole": false, 00:16:38.360 "seek_data": false, 00:16:38.360 "copy": true, 00:16:38.360 "nvme_iov_md": false 00:16:38.360 }, 00:16:38.360 "memory_domains": [ 00:16:38.360 { 00:16:38.360 "dma_device_id": "system", 00:16:38.360 "dma_device_type": 1 00:16:38.360 }, 00:16:38.360 { 00:16:38.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.360 "dma_device_type": 2 00:16:38.360 } 00:16:38.360 ], 00:16:38.360 "driver_specific": {} 00:16:38.360 } 00:16:38.360 ] 00:16:38.360 10:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.360 10:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:38.360 10:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:38.360 10:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:38.360 10:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:38.360 10:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:38.360 10:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:38.360 10:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.360 10:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.360 10:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:16:38.360 10:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.360 10:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.360 10:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.360 10:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.360 10:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.360 10:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.360 10:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.360 10:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.360 10:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.360 10:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.360 "name": "Existed_Raid", 00:16:38.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.360 "strip_size_kb": 64, 00:16:38.360 "state": "configuring", 00:16:38.360 "raid_level": "raid5f", 00:16:38.360 "superblock": false, 00:16:38.360 "num_base_bdevs": 4, 00:16:38.360 "num_base_bdevs_discovered": 3, 00:16:38.360 "num_base_bdevs_operational": 4, 00:16:38.360 "base_bdevs_list": [ 00:16:38.360 { 00:16:38.360 "name": "BaseBdev1", 00:16:38.360 "uuid": "db027323-3dc3-4cc0-992b-9c405d82233e", 00:16:38.360 "is_configured": true, 00:16:38.360 "data_offset": 0, 00:16:38.360 "data_size": 65536 00:16:38.360 }, 00:16:38.360 { 00:16:38.360 "name": "BaseBdev2", 00:16:38.360 "uuid": "28d66c68-4f31-4301-881f-a39660f8cedf", 00:16:38.360 "is_configured": true, 00:16:38.360 "data_offset": 0, 00:16:38.360 "data_size": 65536 00:16:38.360 }, 00:16:38.360 { 
00:16:38.360 "name": "BaseBdev3", 00:16:38.360 "uuid": "a73be94d-813a-4aa7-a583-88b436e809c3", 00:16:38.360 "is_configured": true, 00:16:38.360 "data_offset": 0, 00:16:38.360 "data_size": 65536 00:16:38.360 }, 00:16:38.360 { 00:16:38.360 "name": "BaseBdev4", 00:16:38.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.360 "is_configured": false, 00:16:38.360 "data_offset": 0, 00:16:38.360 "data_size": 0 00:16:38.360 } 00:16:38.360 ] 00:16:38.360 }' 00:16:38.360 10:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.360 10:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.929 10:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:38.929 10:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.929 10:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.929 [2024-11-15 10:44:59.884440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:38.929 [2024-11-15 10:44:59.884794] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:38.929 [2024-11-15 10:44:59.884819] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:38.929 [2024-11-15 10:44:59.885215] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:38.929 [2024-11-15 10:44:59.892004] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:38.929 [2024-11-15 10:44:59.892213] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:38.929 [2024-11-15 10:44:59.892589] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.929 BaseBdev4 00:16:38.929 10:44:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.929 10:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:38.929 10:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:38.929 10:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:38.929 10:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:38.929 10:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:38.929 10:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:38.929 10:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:38.929 10:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.929 10:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.929 10:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.929 10:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:38.929 10:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.929 10:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.929 [ 00:16:38.929 { 00:16:38.929 "name": "BaseBdev4", 00:16:38.929 "aliases": [ 00:16:38.929 "e47f22ac-5a56-4f07-9ca8-89ab10c5899f" 00:16:38.929 ], 00:16:38.929 "product_name": "Malloc disk", 00:16:38.929 "block_size": 512, 00:16:38.929 "num_blocks": 65536, 00:16:38.929 "uuid": "e47f22ac-5a56-4f07-9ca8-89ab10c5899f", 00:16:38.929 "assigned_rate_limits": { 00:16:38.929 "rw_ios_per_sec": 0, 00:16:38.929 
"rw_mbytes_per_sec": 0, 00:16:38.929 "r_mbytes_per_sec": 0, 00:16:38.929 "w_mbytes_per_sec": 0 00:16:38.929 }, 00:16:38.929 "claimed": true, 00:16:38.929 "claim_type": "exclusive_write", 00:16:38.929 "zoned": false, 00:16:38.929 "supported_io_types": { 00:16:38.929 "read": true, 00:16:38.929 "write": true, 00:16:38.929 "unmap": true, 00:16:38.929 "flush": true, 00:16:38.929 "reset": true, 00:16:38.929 "nvme_admin": false, 00:16:38.929 "nvme_io": false, 00:16:38.929 "nvme_io_md": false, 00:16:38.929 "write_zeroes": true, 00:16:38.929 "zcopy": true, 00:16:38.929 "get_zone_info": false, 00:16:38.929 "zone_management": false, 00:16:38.929 "zone_append": false, 00:16:38.929 "compare": false, 00:16:38.929 "compare_and_write": false, 00:16:38.929 "abort": true, 00:16:38.929 "seek_hole": false, 00:16:38.929 "seek_data": false, 00:16:38.929 "copy": true, 00:16:38.929 "nvme_iov_md": false 00:16:38.929 }, 00:16:38.929 "memory_domains": [ 00:16:38.929 { 00:16:38.929 "dma_device_id": "system", 00:16:38.929 "dma_device_type": 1 00:16:38.929 }, 00:16:38.929 { 00:16:38.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.929 "dma_device_type": 2 00:16:38.929 } 00:16:38.929 ], 00:16:38.929 "driver_specific": {} 00:16:38.929 } 00:16:38.929 ] 00:16:38.929 10:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.929 10:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:38.929 10:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:38.929 10:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:38.929 10:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:38.929 10:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:38.929 10:44:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.929 10:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.929 10:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.929 10:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:38.929 10:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.929 10:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.929 10:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.929 10:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.929 10:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.929 10:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.929 10:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.929 10:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.929 10:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.929 10:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.929 "name": "Existed_Raid", 00:16:38.929 "uuid": "e2e64e5e-903e-4175-af93-7859f6c4ab25", 00:16:38.929 "strip_size_kb": 64, 00:16:38.929 "state": "online", 00:16:38.929 "raid_level": "raid5f", 00:16:38.929 "superblock": false, 00:16:38.929 "num_base_bdevs": 4, 00:16:38.929 "num_base_bdevs_discovered": 4, 00:16:38.929 "num_base_bdevs_operational": 4, 00:16:38.929 "base_bdevs_list": [ 00:16:38.929 { 00:16:38.929 "name": 
"BaseBdev1", 00:16:38.929 "uuid": "db027323-3dc3-4cc0-992b-9c405d82233e", 00:16:38.929 "is_configured": true, 00:16:38.929 "data_offset": 0, 00:16:38.929 "data_size": 65536 00:16:38.929 }, 00:16:38.929 { 00:16:38.929 "name": "BaseBdev2", 00:16:38.929 "uuid": "28d66c68-4f31-4301-881f-a39660f8cedf", 00:16:38.929 "is_configured": true, 00:16:38.929 "data_offset": 0, 00:16:38.929 "data_size": 65536 00:16:38.929 }, 00:16:38.929 { 00:16:38.929 "name": "BaseBdev3", 00:16:38.929 "uuid": "a73be94d-813a-4aa7-a583-88b436e809c3", 00:16:38.929 "is_configured": true, 00:16:38.929 "data_offset": 0, 00:16:38.929 "data_size": 65536 00:16:38.929 }, 00:16:38.929 { 00:16:38.929 "name": "BaseBdev4", 00:16:38.929 "uuid": "e47f22ac-5a56-4f07-9ca8-89ab10c5899f", 00:16:38.929 "is_configured": true, 00:16:38.930 "data_offset": 0, 00:16:38.930 "data_size": 65536 00:16:38.930 } 00:16:38.930 ] 00:16:38.930 }' 00:16:38.930 10:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.930 10:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.497 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:39.497 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:39.497 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:39.497 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:39.497 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:39.497 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:39.497 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:39.498 10:45:00 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:39.498 10:45:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.498 10:45:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.498 [2024-11-15 10:45:00.452398] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:39.498 10:45:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.498 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:39.498 "name": "Existed_Raid", 00:16:39.498 "aliases": [ 00:16:39.498 "e2e64e5e-903e-4175-af93-7859f6c4ab25" 00:16:39.498 ], 00:16:39.498 "product_name": "Raid Volume", 00:16:39.498 "block_size": 512, 00:16:39.498 "num_blocks": 196608, 00:16:39.498 "uuid": "e2e64e5e-903e-4175-af93-7859f6c4ab25", 00:16:39.498 "assigned_rate_limits": { 00:16:39.498 "rw_ios_per_sec": 0, 00:16:39.498 "rw_mbytes_per_sec": 0, 00:16:39.498 "r_mbytes_per_sec": 0, 00:16:39.498 "w_mbytes_per_sec": 0 00:16:39.498 }, 00:16:39.498 "claimed": false, 00:16:39.498 "zoned": false, 00:16:39.498 "supported_io_types": { 00:16:39.498 "read": true, 00:16:39.498 "write": true, 00:16:39.498 "unmap": false, 00:16:39.498 "flush": false, 00:16:39.498 "reset": true, 00:16:39.498 "nvme_admin": false, 00:16:39.498 "nvme_io": false, 00:16:39.498 "nvme_io_md": false, 00:16:39.498 "write_zeroes": true, 00:16:39.498 "zcopy": false, 00:16:39.498 "get_zone_info": false, 00:16:39.498 "zone_management": false, 00:16:39.498 "zone_append": false, 00:16:39.498 "compare": false, 00:16:39.498 "compare_and_write": false, 00:16:39.498 "abort": false, 00:16:39.498 "seek_hole": false, 00:16:39.498 "seek_data": false, 00:16:39.498 "copy": false, 00:16:39.498 "nvme_iov_md": false 00:16:39.498 }, 00:16:39.498 "driver_specific": { 00:16:39.498 "raid": { 00:16:39.498 "uuid": "e2e64e5e-903e-4175-af93-7859f6c4ab25", 00:16:39.498 "strip_size_kb": 64, 
00:16:39.498 "state": "online", 00:16:39.498 "raid_level": "raid5f", 00:16:39.498 "superblock": false, 00:16:39.498 "num_base_bdevs": 4, 00:16:39.498 "num_base_bdevs_discovered": 4, 00:16:39.498 "num_base_bdevs_operational": 4, 00:16:39.498 "base_bdevs_list": [ 00:16:39.498 { 00:16:39.498 "name": "BaseBdev1", 00:16:39.498 "uuid": "db027323-3dc3-4cc0-992b-9c405d82233e", 00:16:39.498 "is_configured": true, 00:16:39.498 "data_offset": 0, 00:16:39.498 "data_size": 65536 00:16:39.498 }, 00:16:39.498 { 00:16:39.498 "name": "BaseBdev2", 00:16:39.498 "uuid": "28d66c68-4f31-4301-881f-a39660f8cedf", 00:16:39.498 "is_configured": true, 00:16:39.498 "data_offset": 0, 00:16:39.498 "data_size": 65536 00:16:39.498 }, 00:16:39.498 { 00:16:39.498 "name": "BaseBdev3", 00:16:39.498 "uuid": "a73be94d-813a-4aa7-a583-88b436e809c3", 00:16:39.498 "is_configured": true, 00:16:39.498 "data_offset": 0, 00:16:39.498 "data_size": 65536 00:16:39.498 }, 00:16:39.498 { 00:16:39.498 "name": "BaseBdev4", 00:16:39.498 "uuid": "e47f22ac-5a56-4f07-9ca8-89ab10c5899f", 00:16:39.498 "is_configured": true, 00:16:39.498 "data_offset": 0, 00:16:39.498 "data_size": 65536 00:16:39.498 } 00:16:39.498 ] 00:16:39.498 } 00:16:39.498 } 00:16:39.498 }' 00:16:39.498 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:39.498 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:39.498 BaseBdev2 00:16:39.498 BaseBdev3 00:16:39.498 BaseBdev4' 00:16:39.498 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.498 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:39.498 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:39.498 10:45:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:39.498 10:45:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.498 10:45:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.498 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.498 10:45:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.498 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:39.498 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:39.498 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:39.498 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:39.757 10:45:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.757 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.757 10:45:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.757 10:45:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.757 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:39.757 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:39.757 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:39.757 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.757 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:39.757 10:45:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.757 10:45:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.757 10:45:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.757 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:39.757 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:39.757 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:39.757 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:39.757 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.757 10:45:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.757 10:45:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.757 10:45:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.757 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:39.757 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:39.757 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:39.757 10:45:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.757 10:45:00 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:39.757 [2024-11-15 10:45:00.828292] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:39.757 10:45:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.757 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:39.757 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:39.757 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:39.757 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:39.757 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:39.757 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:39.757 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:39.757 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:39.757 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:39.757 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:39.757 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:39.757 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.757 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.757 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.757 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.016 10:45:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.016 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.016 10:45:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.016 10:45:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.016 10:45:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.016 10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.016 "name": "Existed_Raid", 00:16:40.016 "uuid": "e2e64e5e-903e-4175-af93-7859f6c4ab25", 00:16:40.016 "strip_size_kb": 64, 00:16:40.016 "state": "online", 00:16:40.016 "raid_level": "raid5f", 00:16:40.016 "superblock": false, 00:16:40.016 "num_base_bdevs": 4, 00:16:40.016 "num_base_bdevs_discovered": 3, 00:16:40.016 "num_base_bdevs_operational": 3, 00:16:40.016 "base_bdevs_list": [ 00:16:40.016 { 00:16:40.016 "name": null, 00:16:40.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.016 "is_configured": false, 00:16:40.016 "data_offset": 0, 00:16:40.016 "data_size": 65536 00:16:40.016 }, 00:16:40.016 { 00:16:40.016 "name": "BaseBdev2", 00:16:40.016 "uuid": "28d66c68-4f31-4301-881f-a39660f8cedf", 00:16:40.016 "is_configured": true, 00:16:40.016 "data_offset": 0, 00:16:40.016 "data_size": 65536 00:16:40.016 }, 00:16:40.016 { 00:16:40.016 "name": "BaseBdev3", 00:16:40.016 "uuid": "a73be94d-813a-4aa7-a583-88b436e809c3", 00:16:40.016 "is_configured": true, 00:16:40.016 "data_offset": 0, 00:16:40.016 "data_size": 65536 00:16:40.016 }, 00:16:40.016 { 00:16:40.016 "name": "BaseBdev4", 00:16:40.016 "uuid": "e47f22ac-5a56-4f07-9ca8-89ab10c5899f", 00:16:40.016 "is_configured": true, 00:16:40.016 "data_offset": 0, 00:16:40.016 "data_size": 65536 00:16:40.016 } 00:16:40.016 ] 00:16:40.016 }' 00:16:40.016 
10:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.016 10:45:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.274 10:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:40.274 10:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:40.274 10:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.274 10:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:40.274 10:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.274 10:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.532 10:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.532 10:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:40.532 10:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:40.532 10:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:40.532 10:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.532 10:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.532 [2024-11-15 10:45:01.482855] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:40.532 [2024-11-15 10:45:01.483005] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:40.532 [2024-11-15 10:45:01.568952] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:40.532 10:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:16:40.532 10:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:40.532 10:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:40.532 10:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.532 10:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:40.532 10:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.532 10:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.532 10:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.532 10:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:40.532 10:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:40.532 10:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:40.532 10:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.532 10:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.532 [2024-11-15 10:45:01.625063] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:40.791 10:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.791 10:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:40.791 10:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:40.791 10:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.791 10:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:16:40.791 10:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.791 10:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.791 10:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.791 10:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:40.791 10:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:40.791 10:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:40.791 10:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.791 10:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.791 [2024-11-15 10:45:01.764376] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:40.791 [2024-11-15 10:45:01.764574] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:40.791 10:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.791 10:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:40.792 10:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:40.792 10:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.792 10:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.792 10:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.792 10:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:40.792 10:45:01 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.792 10:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:40.792 10:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:40.792 10:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:40.792 10:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:40.792 10:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:40.792 10:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:40.792 10:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.792 10:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.051 BaseBdev2 00:16:41.051 10:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.051 10:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:41.051 10:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:41.051 10:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:41.051 10:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:41.051 10:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:41.051 10:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:41.051 10:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:41.051 10:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:41.051 10:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.051 10:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.051 10:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:41.051 10:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.051 10:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.051 [ 00:16:41.051 { 00:16:41.051 "name": "BaseBdev2", 00:16:41.051 "aliases": [ 00:16:41.051 "89c90f65-13e0-4a8a-915b-bf980243bc76" 00:16:41.051 ], 00:16:41.051 "product_name": "Malloc disk", 00:16:41.051 "block_size": 512, 00:16:41.051 "num_blocks": 65536, 00:16:41.051 "uuid": "89c90f65-13e0-4a8a-915b-bf980243bc76", 00:16:41.051 "assigned_rate_limits": { 00:16:41.051 "rw_ios_per_sec": 0, 00:16:41.051 "rw_mbytes_per_sec": 0, 00:16:41.051 "r_mbytes_per_sec": 0, 00:16:41.051 "w_mbytes_per_sec": 0 00:16:41.051 }, 00:16:41.051 "claimed": false, 00:16:41.051 "zoned": false, 00:16:41.051 "supported_io_types": { 00:16:41.051 "read": true, 00:16:41.051 "write": true, 00:16:41.051 "unmap": true, 00:16:41.051 "flush": true, 00:16:41.051 "reset": true, 00:16:41.051 "nvme_admin": false, 00:16:41.051 "nvme_io": false, 00:16:41.051 "nvme_io_md": false, 00:16:41.051 "write_zeroes": true, 00:16:41.051 "zcopy": true, 00:16:41.051 "get_zone_info": false, 00:16:41.051 "zone_management": false, 00:16:41.051 "zone_append": false, 00:16:41.051 "compare": false, 00:16:41.051 "compare_and_write": false, 00:16:41.051 "abort": true, 00:16:41.051 "seek_hole": false, 00:16:41.051 "seek_data": false, 00:16:41.051 "copy": true, 00:16:41.051 "nvme_iov_md": false 00:16:41.051 }, 00:16:41.051 "memory_domains": [ 00:16:41.051 { 00:16:41.051 "dma_device_id": "system", 00:16:41.051 "dma_device_type": 1 00:16:41.051 }, 
00:16:41.051 { 00:16:41.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.051 "dma_device_type": 2 00:16:41.051 } 00:16:41.051 ], 00:16:41.051 "driver_specific": {} 00:16:41.051 } 00:16:41.051 ] 00:16:41.051 10:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.051 10:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:41.051 10:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:41.051 10:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:41.051 10:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:41.051 10:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.051 10:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.051 BaseBdev3 00:16:41.051 10:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.051 10:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:41.051 10:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:41.051 10:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:41.051 10:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:41.051 10:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:41.051 10:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:41.051 10:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:41.051 10:45:02 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.051 10:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.051 10:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.051 10:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:41.051 10:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.051 10:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.051 [ 00:16:41.051 { 00:16:41.051 "name": "BaseBdev3", 00:16:41.051 "aliases": [ 00:16:41.051 "90ed3979-fbb2-4ae9-a513-755bedf63704" 00:16:41.051 ], 00:16:41.051 "product_name": "Malloc disk", 00:16:41.051 "block_size": 512, 00:16:41.051 "num_blocks": 65536, 00:16:41.051 "uuid": "90ed3979-fbb2-4ae9-a513-755bedf63704", 00:16:41.051 "assigned_rate_limits": { 00:16:41.051 "rw_ios_per_sec": 0, 00:16:41.051 "rw_mbytes_per_sec": 0, 00:16:41.051 "r_mbytes_per_sec": 0, 00:16:41.051 "w_mbytes_per_sec": 0 00:16:41.051 }, 00:16:41.051 "claimed": false, 00:16:41.051 "zoned": false, 00:16:41.051 "supported_io_types": { 00:16:41.051 "read": true, 00:16:41.051 "write": true, 00:16:41.051 "unmap": true, 00:16:41.051 "flush": true, 00:16:41.051 "reset": true, 00:16:41.051 "nvme_admin": false, 00:16:41.051 "nvme_io": false, 00:16:41.051 "nvme_io_md": false, 00:16:41.051 "write_zeroes": true, 00:16:41.051 "zcopy": true, 00:16:41.051 "get_zone_info": false, 00:16:41.051 "zone_management": false, 00:16:41.051 "zone_append": false, 00:16:41.051 "compare": false, 00:16:41.051 "compare_and_write": false, 00:16:41.051 "abort": true, 00:16:41.051 "seek_hole": false, 00:16:41.051 "seek_data": false, 00:16:41.051 "copy": true, 00:16:41.051 "nvme_iov_md": false 00:16:41.051 }, 00:16:41.051 "memory_domains": [ 00:16:41.051 { 00:16:41.051 "dma_device_id": "system", 00:16:41.051 
"dma_device_type": 1 00:16:41.051 }, 00:16:41.051 { 00:16:41.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.051 "dma_device_type": 2 00:16:41.051 } 00:16:41.051 ], 00:16:41.051 "driver_specific": {} 00:16:41.051 } 00:16:41.051 ] 00:16:41.051 10:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.051 10:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:41.051 10:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:41.051 10:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:41.051 10:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:41.051 10:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.051 10:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.051 BaseBdev4 00:16:41.051 10:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.051 10:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:41.051 10:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:41.051 10:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:41.051 10:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:41.051 10:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:41.051 10:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:41.051 10:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:41.051 10:45:02 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.051 10:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.051 10:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.051 10:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:41.052 10:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.052 10:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.052 [ 00:16:41.052 { 00:16:41.052 "name": "BaseBdev4", 00:16:41.052 "aliases": [ 00:16:41.052 "48aa2473-5e3a-4117-9787-dcc02b52cb65" 00:16:41.052 ], 00:16:41.052 "product_name": "Malloc disk", 00:16:41.052 "block_size": 512, 00:16:41.052 "num_blocks": 65536, 00:16:41.052 "uuid": "48aa2473-5e3a-4117-9787-dcc02b52cb65", 00:16:41.052 "assigned_rate_limits": { 00:16:41.052 "rw_ios_per_sec": 0, 00:16:41.052 "rw_mbytes_per_sec": 0, 00:16:41.052 "r_mbytes_per_sec": 0, 00:16:41.052 "w_mbytes_per_sec": 0 00:16:41.052 }, 00:16:41.052 "claimed": false, 00:16:41.052 "zoned": false, 00:16:41.052 "supported_io_types": { 00:16:41.052 "read": true, 00:16:41.052 "write": true, 00:16:41.052 "unmap": true, 00:16:41.052 "flush": true, 00:16:41.052 "reset": true, 00:16:41.052 "nvme_admin": false, 00:16:41.052 "nvme_io": false, 00:16:41.052 "nvme_io_md": false, 00:16:41.052 "write_zeroes": true, 00:16:41.052 "zcopy": true, 00:16:41.052 "get_zone_info": false, 00:16:41.052 "zone_management": false, 00:16:41.052 "zone_append": false, 00:16:41.052 "compare": false, 00:16:41.052 "compare_and_write": false, 00:16:41.052 "abort": true, 00:16:41.052 "seek_hole": false, 00:16:41.052 "seek_data": false, 00:16:41.052 "copy": true, 00:16:41.052 "nvme_iov_md": false 00:16:41.052 }, 00:16:41.052 "memory_domains": [ 00:16:41.052 { 00:16:41.052 
"dma_device_id": "system", 00:16:41.052 "dma_device_type": 1 00:16:41.052 }, 00:16:41.052 { 00:16:41.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.052 "dma_device_type": 2 00:16:41.052 } 00:16:41.052 ], 00:16:41.052 "driver_specific": {} 00:16:41.052 } 00:16:41.052 ] 00:16:41.052 10:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.052 10:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:41.052 10:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:41.052 10:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:41.052 10:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:41.052 10:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.052 10:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.052 [2024-11-15 10:45:02.134768] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:41.052 [2024-11-15 10:45:02.134836] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:41.052 [2024-11-15 10:45:02.134869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:41.052 [2024-11-15 10:45:02.137363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:41.052 [2024-11-15 10:45:02.137434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:41.052 10:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.052 10:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:16:41.052 10:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:41.052 10:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:41.052 10:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:41.052 10:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:41.052 10:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:41.052 10:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.052 10:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.052 10:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.052 10:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.052 10:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.052 10:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.052 10:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.052 10:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.052 10:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.052 10:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.052 "name": "Existed_Raid", 00:16:41.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.052 "strip_size_kb": 64, 00:16:41.052 "state": "configuring", 00:16:41.052 "raid_level": "raid5f", 00:16:41.052 "superblock": false, 00:16:41.052 
"num_base_bdevs": 4, 00:16:41.052 "num_base_bdevs_discovered": 3, 00:16:41.052 "num_base_bdevs_operational": 4, 00:16:41.052 "base_bdevs_list": [ 00:16:41.052 { 00:16:41.052 "name": "BaseBdev1", 00:16:41.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.052 "is_configured": false, 00:16:41.052 "data_offset": 0, 00:16:41.052 "data_size": 0 00:16:41.052 }, 00:16:41.052 { 00:16:41.052 "name": "BaseBdev2", 00:16:41.052 "uuid": "89c90f65-13e0-4a8a-915b-bf980243bc76", 00:16:41.052 "is_configured": true, 00:16:41.052 "data_offset": 0, 00:16:41.052 "data_size": 65536 00:16:41.052 }, 00:16:41.052 { 00:16:41.052 "name": "BaseBdev3", 00:16:41.052 "uuid": "90ed3979-fbb2-4ae9-a513-755bedf63704", 00:16:41.052 "is_configured": true, 00:16:41.052 "data_offset": 0, 00:16:41.052 "data_size": 65536 00:16:41.052 }, 00:16:41.052 { 00:16:41.052 "name": "BaseBdev4", 00:16:41.052 "uuid": "48aa2473-5e3a-4117-9787-dcc02b52cb65", 00:16:41.052 "is_configured": true, 00:16:41.052 "data_offset": 0, 00:16:41.052 "data_size": 65536 00:16:41.052 } 00:16:41.052 ] 00:16:41.052 }' 00:16:41.052 10:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.052 10:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.619 10:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:41.619 10:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.619 10:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.619 [2024-11-15 10:45:02.646958] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:41.619 10:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.619 10:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:16:41.619 10:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:41.619 10:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:41.619 10:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:41.619 10:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:41.619 10:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:41.619 10:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.619 10:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.619 10:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.619 10:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.619 10:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.619 10:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.619 10:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.619 10:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.619 10:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.619 10:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.619 "name": "Existed_Raid", 00:16:41.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.619 "strip_size_kb": 64, 00:16:41.619 "state": "configuring", 00:16:41.619 "raid_level": "raid5f", 00:16:41.619 "superblock": false, 00:16:41.619 "num_base_bdevs": 4, 
00:16:41.619 "num_base_bdevs_discovered": 2, 00:16:41.619 "num_base_bdevs_operational": 4, 00:16:41.619 "base_bdevs_list": [ 00:16:41.619 { 00:16:41.619 "name": "BaseBdev1", 00:16:41.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.619 "is_configured": false, 00:16:41.619 "data_offset": 0, 00:16:41.619 "data_size": 0 00:16:41.619 }, 00:16:41.619 { 00:16:41.619 "name": null, 00:16:41.619 "uuid": "89c90f65-13e0-4a8a-915b-bf980243bc76", 00:16:41.619 "is_configured": false, 00:16:41.619 "data_offset": 0, 00:16:41.619 "data_size": 65536 00:16:41.619 }, 00:16:41.619 { 00:16:41.619 "name": "BaseBdev3", 00:16:41.619 "uuid": "90ed3979-fbb2-4ae9-a513-755bedf63704", 00:16:41.619 "is_configured": true, 00:16:41.619 "data_offset": 0, 00:16:41.619 "data_size": 65536 00:16:41.619 }, 00:16:41.619 { 00:16:41.619 "name": "BaseBdev4", 00:16:41.619 "uuid": "48aa2473-5e3a-4117-9787-dcc02b52cb65", 00:16:41.619 "is_configured": true, 00:16:41.619 "data_offset": 0, 00:16:41.619 "data_size": 65536 00:16:41.619 } 00:16:41.619 ] 00:16:41.619 }' 00:16:41.619 10:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.619 10:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.186 10:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:42.186 10:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.186 10:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.186 10:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.186 10:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.186 10:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:42.186 10:45:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:42.186 10:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.186 10:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.186 [2024-11-15 10:45:03.247143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:42.186 BaseBdev1 00:16:42.186 10:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.186 10:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:42.186 10:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:42.186 10:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:42.186 10:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:42.186 10:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:42.186 10:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:42.186 10:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:42.186 10:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.186 10:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.186 10:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.186 10:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:42.186 10:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.186 10:45:03 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.186 [ 00:16:42.186 { 00:16:42.186 "name": "BaseBdev1", 00:16:42.186 "aliases": [ 00:16:42.186 "e59f434e-1d8a-4ecb-9e68-2b8c2ff5cdbe" 00:16:42.186 ], 00:16:42.186 "product_name": "Malloc disk", 00:16:42.186 "block_size": 512, 00:16:42.186 "num_blocks": 65536, 00:16:42.186 "uuid": "e59f434e-1d8a-4ecb-9e68-2b8c2ff5cdbe", 00:16:42.186 "assigned_rate_limits": { 00:16:42.186 "rw_ios_per_sec": 0, 00:16:42.186 "rw_mbytes_per_sec": 0, 00:16:42.186 "r_mbytes_per_sec": 0, 00:16:42.186 "w_mbytes_per_sec": 0 00:16:42.186 }, 00:16:42.186 "claimed": true, 00:16:42.186 "claim_type": "exclusive_write", 00:16:42.186 "zoned": false, 00:16:42.186 "supported_io_types": { 00:16:42.186 "read": true, 00:16:42.186 "write": true, 00:16:42.186 "unmap": true, 00:16:42.186 "flush": true, 00:16:42.186 "reset": true, 00:16:42.186 "nvme_admin": false, 00:16:42.186 "nvme_io": false, 00:16:42.186 "nvme_io_md": false, 00:16:42.186 "write_zeroes": true, 00:16:42.186 "zcopy": true, 00:16:42.186 "get_zone_info": false, 00:16:42.186 "zone_management": false, 00:16:42.186 "zone_append": false, 00:16:42.186 "compare": false, 00:16:42.187 "compare_and_write": false, 00:16:42.187 "abort": true, 00:16:42.187 "seek_hole": false, 00:16:42.187 "seek_data": false, 00:16:42.187 "copy": true, 00:16:42.187 "nvme_iov_md": false 00:16:42.187 }, 00:16:42.187 "memory_domains": [ 00:16:42.187 { 00:16:42.187 "dma_device_id": "system", 00:16:42.187 "dma_device_type": 1 00:16:42.187 }, 00:16:42.187 { 00:16:42.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.187 "dma_device_type": 2 00:16:42.187 } 00:16:42.187 ], 00:16:42.187 "driver_specific": {} 00:16:42.187 } 00:16:42.187 ] 00:16:42.187 10:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.187 10:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:42.187 10:45:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:42.187 10:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:42.187 10:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:42.187 10:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.187 10:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.187 10:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:42.187 10:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.187 10:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.187 10:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.187 10:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.187 10:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.187 10:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.187 10:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.187 10:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.187 10:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.187 10:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.187 "name": "Existed_Raid", 00:16:42.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.187 "strip_size_kb": 64, 00:16:42.187 "state": 
"configuring", 00:16:42.187 "raid_level": "raid5f", 00:16:42.187 "superblock": false, 00:16:42.187 "num_base_bdevs": 4, 00:16:42.187 "num_base_bdevs_discovered": 3, 00:16:42.187 "num_base_bdevs_operational": 4, 00:16:42.187 "base_bdevs_list": [ 00:16:42.187 { 00:16:42.187 "name": "BaseBdev1", 00:16:42.187 "uuid": "e59f434e-1d8a-4ecb-9e68-2b8c2ff5cdbe", 00:16:42.187 "is_configured": true, 00:16:42.187 "data_offset": 0, 00:16:42.187 "data_size": 65536 00:16:42.187 }, 00:16:42.187 { 00:16:42.187 "name": null, 00:16:42.187 "uuid": "89c90f65-13e0-4a8a-915b-bf980243bc76", 00:16:42.187 "is_configured": false, 00:16:42.187 "data_offset": 0, 00:16:42.187 "data_size": 65536 00:16:42.187 }, 00:16:42.187 { 00:16:42.187 "name": "BaseBdev3", 00:16:42.187 "uuid": "90ed3979-fbb2-4ae9-a513-755bedf63704", 00:16:42.187 "is_configured": true, 00:16:42.187 "data_offset": 0, 00:16:42.187 "data_size": 65536 00:16:42.187 }, 00:16:42.187 { 00:16:42.187 "name": "BaseBdev4", 00:16:42.187 "uuid": "48aa2473-5e3a-4117-9787-dcc02b52cb65", 00:16:42.187 "is_configured": true, 00:16:42.187 "data_offset": 0, 00:16:42.187 "data_size": 65536 00:16:42.187 } 00:16:42.187 ] 00:16:42.187 }' 00:16:42.187 10:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.187 10:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.754 10:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.754 10:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.754 10:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:42.754 10:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.754 10:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.754 10:45:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:42.754 10:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:42.754 10:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.754 10:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.754 [2024-11-15 10:45:03.871473] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:42.754 10:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.754 10:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:42.754 10:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:42.754 10:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:42.755 10:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.755 10:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.755 10:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:42.755 10:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.755 10:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.755 10:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.755 10:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.755 10:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.755 10:45:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.755 10:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.755 10:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.755 10:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.013 10:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.013 "name": "Existed_Raid", 00:16:43.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.013 "strip_size_kb": 64, 00:16:43.013 "state": "configuring", 00:16:43.013 "raid_level": "raid5f", 00:16:43.013 "superblock": false, 00:16:43.013 "num_base_bdevs": 4, 00:16:43.013 "num_base_bdevs_discovered": 2, 00:16:43.013 "num_base_bdevs_operational": 4, 00:16:43.013 "base_bdevs_list": [ 00:16:43.013 { 00:16:43.013 "name": "BaseBdev1", 00:16:43.013 "uuid": "e59f434e-1d8a-4ecb-9e68-2b8c2ff5cdbe", 00:16:43.013 "is_configured": true, 00:16:43.013 "data_offset": 0, 00:16:43.013 "data_size": 65536 00:16:43.013 }, 00:16:43.013 { 00:16:43.013 "name": null, 00:16:43.013 "uuid": "89c90f65-13e0-4a8a-915b-bf980243bc76", 00:16:43.013 "is_configured": false, 00:16:43.013 "data_offset": 0, 00:16:43.013 "data_size": 65536 00:16:43.013 }, 00:16:43.013 { 00:16:43.013 "name": null, 00:16:43.013 "uuid": "90ed3979-fbb2-4ae9-a513-755bedf63704", 00:16:43.013 "is_configured": false, 00:16:43.013 "data_offset": 0, 00:16:43.013 "data_size": 65536 00:16:43.013 }, 00:16:43.013 { 00:16:43.013 "name": "BaseBdev4", 00:16:43.013 "uuid": "48aa2473-5e3a-4117-9787-dcc02b52cb65", 00:16:43.013 "is_configured": true, 00:16:43.013 "data_offset": 0, 00:16:43.013 "data_size": 65536 00:16:43.013 } 00:16:43.013 ] 00:16:43.013 }' 00:16:43.013 10:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.014 10:45:03 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.272 10:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.272 10:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.272 10:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:43.272 10:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.272 10:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.272 10:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:43.272 10:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:43.273 10:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.273 10:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.273 [2024-11-15 10:45:04.427636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:43.531 10:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.531 10:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:43.531 10:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:43.531 10:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:43.531 10:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:43.531 10:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.531 
10:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:43.531 10:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.531 10:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.531 10:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.531 10:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.531 10:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.531 10:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.531 10:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.531 10:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.531 10:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.531 10:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.531 "name": "Existed_Raid", 00:16:43.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.531 "strip_size_kb": 64, 00:16:43.531 "state": "configuring", 00:16:43.531 "raid_level": "raid5f", 00:16:43.531 "superblock": false, 00:16:43.531 "num_base_bdevs": 4, 00:16:43.531 "num_base_bdevs_discovered": 3, 00:16:43.531 "num_base_bdevs_operational": 4, 00:16:43.531 "base_bdevs_list": [ 00:16:43.531 { 00:16:43.531 "name": "BaseBdev1", 00:16:43.531 "uuid": "e59f434e-1d8a-4ecb-9e68-2b8c2ff5cdbe", 00:16:43.531 "is_configured": true, 00:16:43.531 "data_offset": 0, 00:16:43.531 "data_size": 65536 00:16:43.531 }, 00:16:43.531 { 00:16:43.531 "name": null, 00:16:43.531 "uuid": "89c90f65-13e0-4a8a-915b-bf980243bc76", 00:16:43.531 "is_configured": 
false, 00:16:43.531 "data_offset": 0, 00:16:43.531 "data_size": 65536 00:16:43.531 }, 00:16:43.531 { 00:16:43.531 "name": "BaseBdev3", 00:16:43.531 "uuid": "90ed3979-fbb2-4ae9-a513-755bedf63704", 00:16:43.531 "is_configured": true, 00:16:43.531 "data_offset": 0, 00:16:43.531 "data_size": 65536 00:16:43.531 }, 00:16:43.531 { 00:16:43.531 "name": "BaseBdev4", 00:16:43.531 "uuid": "48aa2473-5e3a-4117-9787-dcc02b52cb65", 00:16:43.531 "is_configured": true, 00:16:43.531 "data_offset": 0, 00:16:43.531 "data_size": 65536 00:16:43.531 } 00:16:43.531 ] 00:16:43.531 }' 00:16:43.531 10:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.531 10:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.789 10:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.789 10:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.789 10:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.789 10:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:44.046 10:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.046 10:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:44.046 10:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:44.046 10:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.046 10:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.046 [2024-11-15 10:45:05.003831] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:44.046 10:45:05 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.046 10:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:44.046 10:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:44.046 10:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:44.046 10:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:44.046 10:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:44.046 10:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:44.047 10:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.047 10:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.047 10:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.047 10:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.047 10:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.047 10:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.047 10:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.047 10:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.047 10:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.047 10:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.047 "name": "Existed_Raid", 00:16:44.047 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:44.047 "strip_size_kb": 64, 00:16:44.047 "state": "configuring", 00:16:44.047 "raid_level": "raid5f", 00:16:44.047 "superblock": false, 00:16:44.047 "num_base_bdevs": 4, 00:16:44.047 "num_base_bdevs_discovered": 2, 00:16:44.047 "num_base_bdevs_operational": 4, 00:16:44.047 "base_bdevs_list": [ 00:16:44.047 { 00:16:44.047 "name": null, 00:16:44.047 "uuid": "e59f434e-1d8a-4ecb-9e68-2b8c2ff5cdbe", 00:16:44.047 "is_configured": false, 00:16:44.047 "data_offset": 0, 00:16:44.047 "data_size": 65536 00:16:44.047 }, 00:16:44.047 { 00:16:44.047 "name": null, 00:16:44.047 "uuid": "89c90f65-13e0-4a8a-915b-bf980243bc76", 00:16:44.047 "is_configured": false, 00:16:44.047 "data_offset": 0, 00:16:44.047 "data_size": 65536 00:16:44.047 }, 00:16:44.047 { 00:16:44.047 "name": "BaseBdev3", 00:16:44.047 "uuid": "90ed3979-fbb2-4ae9-a513-755bedf63704", 00:16:44.047 "is_configured": true, 00:16:44.047 "data_offset": 0, 00:16:44.047 "data_size": 65536 00:16:44.047 }, 00:16:44.047 { 00:16:44.047 "name": "BaseBdev4", 00:16:44.047 "uuid": "48aa2473-5e3a-4117-9787-dcc02b52cb65", 00:16:44.047 "is_configured": true, 00:16:44.047 "data_offset": 0, 00:16:44.047 "data_size": 65536 00:16:44.047 } 00:16:44.047 ] 00:16:44.047 }' 00:16:44.047 10:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.047 10:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.615 10:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:44.615 10:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.615 10:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.615 10:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.615 10:45:05 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.615 10:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:44.615 10:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:44.615 10:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.615 10:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.615 [2024-11-15 10:45:05.651998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:44.615 10:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.615 10:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:44.615 10:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:44.615 10:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:44.615 10:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:44.615 10:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:44.615 10:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:44.615 10:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.615 10:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.615 10:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.615 10:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.615 10:45:05 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.615 10:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.615 10:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.615 10:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.615 10:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.615 10:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.615 "name": "Existed_Raid", 00:16:44.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.615 "strip_size_kb": 64, 00:16:44.615 "state": "configuring", 00:16:44.615 "raid_level": "raid5f", 00:16:44.615 "superblock": false, 00:16:44.615 "num_base_bdevs": 4, 00:16:44.615 "num_base_bdevs_discovered": 3, 00:16:44.615 "num_base_bdevs_operational": 4, 00:16:44.615 "base_bdevs_list": [ 00:16:44.615 { 00:16:44.615 "name": null, 00:16:44.615 "uuid": "e59f434e-1d8a-4ecb-9e68-2b8c2ff5cdbe", 00:16:44.615 "is_configured": false, 00:16:44.615 "data_offset": 0, 00:16:44.615 "data_size": 65536 00:16:44.615 }, 00:16:44.615 { 00:16:44.615 "name": "BaseBdev2", 00:16:44.615 "uuid": "89c90f65-13e0-4a8a-915b-bf980243bc76", 00:16:44.615 "is_configured": true, 00:16:44.615 "data_offset": 0, 00:16:44.615 "data_size": 65536 00:16:44.615 }, 00:16:44.615 { 00:16:44.615 "name": "BaseBdev3", 00:16:44.615 "uuid": "90ed3979-fbb2-4ae9-a513-755bedf63704", 00:16:44.615 "is_configured": true, 00:16:44.615 "data_offset": 0, 00:16:44.615 "data_size": 65536 00:16:44.615 }, 00:16:44.615 { 00:16:44.615 "name": "BaseBdev4", 00:16:44.615 "uuid": "48aa2473-5e3a-4117-9787-dcc02b52cb65", 00:16:44.615 "is_configured": true, 00:16:44.615 "data_offset": 0, 00:16:44.615 "data_size": 65536 00:16:44.615 } 00:16:44.615 ] 00:16:44.615 }' 00:16:44.615 10:45:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.615 10:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.182 10:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.182 10:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:45.182 10:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.182 10:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.182 10:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.182 10:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:45.182 10:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.182 10:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:45.182 10:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.182 10:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.182 10:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.182 10:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e59f434e-1d8a-4ecb-9e68-2b8c2ff5cdbe 00:16:45.182 10:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.182 10:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.182 [2024-11-15 10:45:06.334466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:45.182 [2024-11-15 
10:45:06.334594] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:45.182 [2024-11-15 10:45:06.334611] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:45.182 [2024-11-15 10:45:06.334988] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:45.441 [2024-11-15 10:45:06.341556] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:45.441 [2024-11-15 10:45:06.341587] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:45.441 [2024-11-15 10:45:06.341956] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:45.441 NewBaseBdev 00:16:45.441 10:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.441 10:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:45.441 10:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:45.441 10:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:45.441 10:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:45.441 10:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:45.441 10:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:45.441 10:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:45.441 10:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.441 10:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.441 10:45:06 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.441 10:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:45.441 10:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.441 10:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.441 [ 00:16:45.441 { 00:16:45.441 "name": "NewBaseBdev", 00:16:45.441 "aliases": [ 00:16:45.441 "e59f434e-1d8a-4ecb-9e68-2b8c2ff5cdbe" 00:16:45.441 ], 00:16:45.441 "product_name": "Malloc disk", 00:16:45.441 "block_size": 512, 00:16:45.441 "num_blocks": 65536, 00:16:45.441 "uuid": "e59f434e-1d8a-4ecb-9e68-2b8c2ff5cdbe", 00:16:45.441 "assigned_rate_limits": { 00:16:45.441 "rw_ios_per_sec": 0, 00:16:45.441 "rw_mbytes_per_sec": 0, 00:16:45.441 "r_mbytes_per_sec": 0, 00:16:45.441 "w_mbytes_per_sec": 0 00:16:45.441 }, 00:16:45.441 "claimed": true, 00:16:45.441 "claim_type": "exclusive_write", 00:16:45.441 "zoned": false, 00:16:45.441 "supported_io_types": { 00:16:45.441 "read": true, 00:16:45.441 "write": true, 00:16:45.441 "unmap": true, 00:16:45.441 "flush": true, 00:16:45.441 "reset": true, 00:16:45.441 "nvme_admin": false, 00:16:45.441 "nvme_io": false, 00:16:45.441 "nvme_io_md": false, 00:16:45.441 "write_zeroes": true, 00:16:45.441 "zcopy": true, 00:16:45.441 "get_zone_info": false, 00:16:45.441 "zone_management": false, 00:16:45.441 "zone_append": false, 00:16:45.441 "compare": false, 00:16:45.441 "compare_and_write": false, 00:16:45.441 "abort": true, 00:16:45.441 "seek_hole": false, 00:16:45.441 "seek_data": false, 00:16:45.441 "copy": true, 00:16:45.441 "nvme_iov_md": false 00:16:45.441 }, 00:16:45.441 "memory_domains": [ 00:16:45.441 { 00:16:45.441 "dma_device_id": "system", 00:16:45.441 "dma_device_type": 1 00:16:45.441 }, 00:16:45.441 { 00:16:45.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.441 "dma_device_type": 2 00:16:45.441 } 
00:16:45.441 ], 00:16:45.441 "driver_specific": {} 00:16:45.441 } 00:16:45.441 ] 00:16:45.441 10:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.441 10:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:45.441 10:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:45.441 10:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:45.441 10:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:45.441 10:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:45.441 10:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:45.441 10:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:45.441 10:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.441 10:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.441 10:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.441 10:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.441 10:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.441 10:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.441 10:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.441 10:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.441 10:45:06 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.441 10:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.441 "name": "Existed_Raid", 00:16:45.441 "uuid": "adc66e23-abd7-477d-9b68-3dc66e7358b9", 00:16:45.441 "strip_size_kb": 64, 00:16:45.441 "state": "online", 00:16:45.441 "raid_level": "raid5f", 00:16:45.441 "superblock": false, 00:16:45.441 "num_base_bdevs": 4, 00:16:45.441 "num_base_bdevs_discovered": 4, 00:16:45.441 "num_base_bdevs_operational": 4, 00:16:45.441 "base_bdevs_list": [ 00:16:45.441 { 00:16:45.441 "name": "NewBaseBdev", 00:16:45.441 "uuid": "e59f434e-1d8a-4ecb-9e68-2b8c2ff5cdbe", 00:16:45.441 "is_configured": true, 00:16:45.441 "data_offset": 0, 00:16:45.441 "data_size": 65536 00:16:45.441 }, 00:16:45.441 { 00:16:45.441 "name": "BaseBdev2", 00:16:45.441 "uuid": "89c90f65-13e0-4a8a-915b-bf980243bc76", 00:16:45.441 "is_configured": true, 00:16:45.441 "data_offset": 0, 00:16:45.441 "data_size": 65536 00:16:45.441 }, 00:16:45.441 { 00:16:45.441 "name": "BaseBdev3", 00:16:45.441 "uuid": "90ed3979-fbb2-4ae9-a513-755bedf63704", 00:16:45.441 "is_configured": true, 00:16:45.441 "data_offset": 0, 00:16:45.441 "data_size": 65536 00:16:45.441 }, 00:16:45.441 { 00:16:45.441 "name": "BaseBdev4", 00:16:45.441 "uuid": "48aa2473-5e3a-4117-9787-dcc02b52cb65", 00:16:45.441 "is_configured": true, 00:16:45.441 "data_offset": 0, 00:16:45.441 "data_size": 65536 00:16:45.441 } 00:16:45.441 ] 00:16:45.441 }' 00:16:45.441 10:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.441 10:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.007 10:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:46.007 10:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:46.007 10:45:06 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:46.007 10:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:46.007 10:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:46.007 10:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:46.007 10:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:46.007 10:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:46.007 10:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.007 10:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.007 [2024-11-15 10:45:06.897678] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:46.007 10:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.007 10:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:46.007 "name": "Existed_Raid", 00:16:46.007 "aliases": [ 00:16:46.007 "adc66e23-abd7-477d-9b68-3dc66e7358b9" 00:16:46.007 ], 00:16:46.007 "product_name": "Raid Volume", 00:16:46.007 "block_size": 512, 00:16:46.007 "num_blocks": 196608, 00:16:46.007 "uuid": "adc66e23-abd7-477d-9b68-3dc66e7358b9", 00:16:46.007 "assigned_rate_limits": { 00:16:46.007 "rw_ios_per_sec": 0, 00:16:46.007 "rw_mbytes_per_sec": 0, 00:16:46.007 "r_mbytes_per_sec": 0, 00:16:46.007 "w_mbytes_per_sec": 0 00:16:46.007 }, 00:16:46.007 "claimed": false, 00:16:46.007 "zoned": false, 00:16:46.007 "supported_io_types": { 00:16:46.007 "read": true, 00:16:46.007 "write": true, 00:16:46.007 "unmap": false, 00:16:46.007 "flush": false, 00:16:46.007 "reset": true, 00:16:46.007 "nvme_admin": false, 00:16:46.007 "nvme_io": false, 00:16:46.007 "nvme_io_md": 
false, 00:16:46.007 "write_zeroes": true, 00:16:46.007 "zcopy": false, 00:16:46.007 "get_zone_info": false, 00:16:46.007 "zone_management": false, 00:16:46.007 "zone_append": false, 00:16:46.007 "compare": false, 00:16:46.007 "compare_and_write": false, 00:16:46.007 "abort": false, 00:16:46.007 "seek_hole": false, 00:16:46.007 "seek_data": false, 00:16:46.007 "copy": false, 00:16:46.007 "nvme_iov_md": false 00:16:46.007 }, 00:16:46.007 "driver_specific": { 00:16:46.007 "raid": { 00:16:46.007 "uuid": "adc66e23-abd7-477d-9b68-3dc66e7358b9", 00:16:46.007 "strip_size_kb": 64, 00:16:46.007 "state": "online", 00:16:46.007 "raid_level": "raid5f", 00:16:46.007 "superblock": false, 00:16:46.007 "num_base_bdevs": 4, 00:16:46.007 "num_base_bdevs_discovered": 4, 00:16:46.007 "num_base_bdevs_operational": 4, 00:16:46.007 "base_bdevs_list": [ 00:16:46.007 { 00:16:46.007 "name": "NewBaseBdev", 00:16:46.007 "uuid": "e59f434e-1d8a-4ecb-9e68-2b8c2ff5cdbe", 00:16:46.007 "is_configured": true, 00:16:46.007 "data_offset": 0, 00:16:46.007 "data_size": 65536 00:16:46.007 }, 00:16:46.007 { 00:16:46.007 "name": "BaseBdev2", 00:16:46.007 "uuid": "89c90f65-13e0-4a8a-915b-bf980243bc76", 00:16:46.007 "is_configured": true, 00:16:46.007 "data_offset": 0, 00:16:46.007 "data_size": 65536 00:16:46.007 }, 00:16:46.007 { 00:16:46.007 "name": "BaseBdev3", 00:16:46.007 "uuid": "90ed3979-fbb2-4ae9-a513-755bedf63704", 00:16:46.007 "is_configured": true, 00:16:46.007 "data_offset": 0, 00:16:46.007 "data_size": 65536 00:16:46.007 }, 00:16:46.007 { 00:16:46.007 "name": "BaseBdev4", 00:16:46.007 "uuid": "48aa2473-5e3a-4117-9787-dcc02b52cb65", 00:16:46.007 "is_configured": true, 00:16:46.007 "data_offset": 0, 00:16:46.007 "data_size": 65536 00:16:46.007 } 00:16:46.007 ] 00:16:46.007 } 00:16:46.007 } 00:16:46.007 }' 00:16:46.007 10:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:46.007 10:45:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:46.007 BaseBdev2 00:16:46.007 BaseBdev3 00:16:46.007 BaseBdev4' 00:16:46.007 10:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:46.007 10:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:46.007 10:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:46.007 10:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:46.007 10:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:46.007 10:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.007 10:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.007 10:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.007 10:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:46.007 10:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:46.007 10:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:46.007 10:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:46.007 10:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.007 10:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.007 10:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:16:46.007 10:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.007 10:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:46.007 10:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:46.007 10:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:46.007 10:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:46.007 10:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:46.007 10:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.007 10:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.267 10:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.267 10:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:46.267 10:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:46.267 10:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:46.267 10:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:46.267 10:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:46.267 10:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.267 10:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.267 10:45:07 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.267 10:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:46.267 10:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:46.267 10:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:46.267 10:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.267 10:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.267 [2024-11-15 10:45:07.269427] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:46.267 [2024-11-15 10:45:07.269480] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:46.267 [2024-11-15 10:45:07.269632] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:46.267 [2024-11-15 10:45:07.270030] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:46.267 [2024-11-15 10:45:07.270066] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:46.267 10:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.267 10:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83114 00:16:46.267 10:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 83114 ']' 00:16:46.267 10:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 83114 00:16:46.267 10:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:16:46.267 10:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:46.267 10:45:07 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83114 00:16:46.267 killing process with pid 83114 00:16:46.267 10:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:46.267 10:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:46.267 10:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83114' 00:16:46.267 10:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 83114 00:16:46.267 [2024-11-15 10:45:07.309435] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:46.267 10:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 83114 00:16:46.525 [2024-11-15 10:45:07.662735] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:47.902 10:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:47.902 00:16:47.902 real 0m12.788s 00:16:47.902 user 0m21.213s 00:16:47.902 sys 0m1.746s 00:16:47.902 10:45:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:47.902 ************************************ 00:16:47.902 END TEST raid5f_state_function_test 00:16:47.902 ************************************ 00:16:47.902 10:45:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.902 10:45:08 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:16:47.902 10:45:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:47.902 10:45:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:47.902 10:45:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:47.902 ************************************ 00:16:47.902 START TEST 
raid5f_state_function_test_sb 00:16:47.902 ************************************ 00:16:47.902 10:45:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:16:47.902 10:45:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:47.902 10:45:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:47.902 10:45:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:47.902 10:45:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:47.902 10:45:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:47.902 10:45:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:47.902 10:45:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:47.902 10:45:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:47.902 10:45:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:47.902 10:45:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:47.902 10:45:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:47.902 10:45:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:47.902 10:45:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:47.902 10:45:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:47.902 10:45:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:47.902 10:45:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:47.902 
10:45:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:47.902 10:45:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:47.902 Process raid pid: 83796 00:16:47.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:47.902 10:45:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:47.902 10:45:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:47.902 10:45:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:47.902 10:45:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:47.902 10:45:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:47.902 10:45:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:47.902 10:45:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:47.902 10:45:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:47.902 10:45:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:47.902 10:45:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:47.902 10:45:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:47.902 10:45:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83796 00:16:47.902 10:45:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83796' 00:16:47.902 10:45:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83796 
00:16:47.902 10:45:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:47.902 10:45:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83796 ']' 00:16:47.902 10:45:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.902 10:45:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:47.902 10:45:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.902 10:45:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:47.902 10:45:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.902 [2024-11-15 10:45:08.893983] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:16:47.902 [2024-11-15 10:45:08.895137] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:48.160 [2024-11-15 10:45:09.083804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.160 [2024-11-15 10:45:09.224208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.417 [2024-11-15 10:45:09.437987] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:48.417 [2024-11-15 10:45:09.438279] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:48.984 10:45:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:48.984 10:45:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:48.984 10:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:48.984 10:45:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.984 10:45:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.984 [2024-11-15 10:45:09.881931] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:48.984 [2024-11-15 10:45:09.882133] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:48.984 [2024-11-15 10:45:09.882288] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:48.984 [2024-11-15 10:45:09.882486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:48.984 [2024-11-15 10:45:09.882662] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:16:48.984 [2024-11-15 10:45:09.882831] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:48.984 [2024-11-15 10:45:09.882996] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:48.984 [2024-11-15 10:45:09.883167] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:48.984 10:45:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.984 10:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:48.984 10:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:48.984 10:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:48.984 10:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:48.984 10:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:48.984 10:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:48.984 10:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.984 10:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.984 10:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.984 10:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.984 10:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:48.984 10:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:48.984 10:45:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.984 10:45:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.984 10:45:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.984 10:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.984 "name": "Existed_Raid", 00:16:48.984 "uuid": "544d670b-1afd-42b9-a78d-bce22511c8e3", 00:16:48.984 "strip_size_kb": 64, 00:16:48.984 "state": "configuring", 00:16:48.984 "raid_level": "raid5f", 00:16:48.984 "superblock": true, 00:16:48.984 "num_base_bdevs": 4, 00:16:48.984 "num_base_bdevs_discovered": 0, 00:16:48.984 "num_base_bdevs_operational": 4, 00:16:48.984 "base_bdevs_list": [ 00:16:48.984 { 00:16:48.984 "name": "BaseBdev1", 00:16:48.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.984 "is_configured": false, 00:16:48.984 "data_offset": 0, 00:16:48.984 "data_size": 0 00:16:48.984 }, 00:16:48.984 { 00:16:48.984 "name": "BaseBdev2", 00:16:48.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.984 "is_configured": false, 00:16:48.984 "data_offset": 0, 00:16:48.984 "data_size": 0 00:16:48.984 }, 00:16:48.984 { 00:16:48.984 "name": "BaseBdev3", 00:16:48.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.984 "is_configured": false, 00:16:48.984 "data_offset": 0, 00:16:48.984 "data_size": 0 00:16:48.984 }, 00:16:48.984 { 00:16:48.984 "name": "BaseBdev4", 00:16:48.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.984 "is_configured": false, 00:16:48.984 "data_offset": 0, 00:16:48.984 "data_size": 0 00:16:48.984 } 00:16:48.984 ] 00:16:48.984 }' 00:16:48.984 10:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.984 10:45:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:49.549 10:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:49.549 10:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.549 10:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.549 [2024-11-15 10:45:10.414069] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:49.549 [2024-11-15 10:45:10.414115] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:49.549 10:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.549 10:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:49.549 10:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.549 10:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.549 [2024-11-15 10:45:10.422018] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:49.549 [2024-11-15 10:45:10.422250] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:49.549 [2024-11-15 10:45:10.422410] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:49.549 [2024-11-15 10:45:10.422583] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:49.549 [2024-11-15 10:45:10.422723] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:49.549 [2024-11-15 10:45:10.422818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:49.549 [2024-11-15 10:45:10.422993] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:49.549 [2024-11-15 10:45:10.423199] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:49.549 10:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.549 10:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:49.549 10:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.549 10:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.549 [2024-11-15 10:45:10.471800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:49.549 BaseBdev1 00:16:49.549 10:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.549 10:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:49.549 10:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:49.549 10:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:49.549 10:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:49.549 10:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:49.549 10:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:49.549 10:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:49.549 10:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.549 10:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:49.549 10:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.549 10:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:49.549 10:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.549 10:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.549 [ 00:16:49.549 { 00:16:49.549 "name": "BaseBdev1", 00:16:49.549 "aliases": [ 00:16:49.549 "f27eb7c6-8440-4ad0-90d8-5f46f5515376" 00:16:49.549 ], 00:16:49.549 "product_name": "Malloc disk", 00:16:49.549 "block_size": 512, 00:16:49.549 "num_blocks": 65536, 00:16:49.549 "uuid": "f27eb7c6-8440-4ad0-90d8-5f46f5515376", 00:16:49.549 "assigned_rate_limits": { 00:16:49.549 "rw_ios_per_sec": 0, 00:16:49.549 "rw_mbytes_per_sec": 0, 00:16:49.549 "r_mbytes_per_sec": 0, 00:16:49.549 "w_mbytes_per_sec": 0 00:16:49.549 }, 00:16:49.549 "claimed": true, 00:16:49.549 "claim_type": "exclusive_write", 00:16:49.549 "zoned": false, 00:16:49.549 "supported_io_types": { 00:16:49.549 "read": true, 00:16:49.549 "write": true, 00:16:49.549 "unmap": true, 00:16:49.549 "flush": true, 00:16:49.549 "reset": true, 00:16:49.549 "nvme_admin": false, 00:16:49.549 "nvme_io": false, 00:16:49.549 "nvme_io_md": false, 00:16:49.549 "write_zeroes": true, 00:16:49.549 "zcopy": true, 00:16:49.549 "get_zone_info": false, 00:16:49.549 "zone_management": false, 00:16:49.549 "zone_append": false, 00:16:49.549 "compare": false, 00:16:49.549 "compare_and_write": false, 00:16:49.549 "abort": true, 00:16:49.549 "seek_hole": false, 00:16:49.549 "seek_data": false, 00:16:49.549 "copy": true, 00:16:49.549 "nvme_iov_md": false 00:16:49.549 }, 00:16:49.549 "memory_domains": [ 00:16:49.549 { 00:16:49.549 "dma_device_id": "system", 00:16:49.549 "dma_device_type": 1 00:16:49.549 }, 00:16:49.549 { 00:16:49.549 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:49.549 "dma_device_type": 2 00:16:49.549 } 00:16:49.549 ], 00:16:49.549 "driver_specific": {} 00:16:49.549 } 00:16:49.549 ] 00:16:49.549 10:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.549 10:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:49.549 10:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:49.549 10:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:49.549 10:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:49.549 10:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:49.549 10:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:49.549 10:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:49.549 10:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.549 10:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.549 10:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.549 10:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.549 10:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.549 10:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.549 10:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.549 10:45:10 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.549 10:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.549 10:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.549 "name": "Existed_Raid", 00:16:49.549 "uuid": "de62cebe-98fc-42e0-a96c-87dc30897b2b", 00:16:49.549 "strip_size_kb": 64, 00:16:49.549 "state": "configuring", 00:16:49.549 "raid_level": "raid5f", 00:16:49.549 "superblock": true, 00:16:49.549 "num_base_bdevs": 4, 00:16:49.549 "num_base_bdevs_discovered": 1, 00:16:49.549 "num_base_bdevs_operational": 4, 00:16:49.549 "base_bdevs_list": [ 00:16:49.549 { 00:16:49.550 "name": "BaseBdev1", 00:16:49.550 "uuid": "f27eb7c6-8440-4ad0-90d8-5f46f5515376", 00:16:49.550 "is_configured": true, 00:16:49.550 "data_offset": 2048, 00:16:49.550 "data_size": 63488 00:16:49.550 }, 00:16:49.550 { 00:16:49.550 "name": "BaseBdev2", 00:16:49.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.550 "is_configured": false, 00:16:49.550 "data_offset": 0, 00:16:49.550 "data_size": 0 00:16:49.550 }, 00:16:49.550 { 00:16:49.550 "name": "BaseBdev3", 00:16:49.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.550 "is_configured": false, 00:16:49.550 "data_offset": 0, 00:16:49.550 "data_size": 0 00:16:49.550 }, 00:16:49.550 { 00:16:49.550 "name": "BaseBdev4", 00:16:49.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.550 "is_configured": false, 00:16:49.550 "data_offset": 0, 00:16:49.550 "data_size": 0 00:16:49.550 } 00:16:49.550 ] 00:16:49.550 }' 00:16:49.550 10:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.550 10:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.116 10:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:50.116 10:45:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.116 10:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.116 [2024-11-15 10:45:11.048103] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:50.116 [2024-11-15 10:45:11.048305] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:50.116 10:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.116 10:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:50.116 10:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.116 10:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.116 [2024-11-15 10:45:11.056132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:50.116 [2024-11-15 10:45:11.058752] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:50.116 [2024-11-15 10:45:11.058936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:50.116 [2024-11-15 10:45:11.059118] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:50.116 [2024-11-15 10:45:11.059316] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:50.116 [2024-11-15 10:45:11.059462] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:50.116 [2024-11-15 10:45:11.059653] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:50.116 10:45:11 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.116 10:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:50.116 10:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:50.116 10:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:50.116 10:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:50.116 10:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:50.116 10:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:50.116 10:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:50.116 10:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:50.116 10:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.116 10:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.116 10:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.116 10:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.116 10:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.116 10:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.116 10:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.116 10:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.116 10:45:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.116 10:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.116 "name": "Existed_Raid", 00:16:50.116 "uuid": "7412ba85-1cc5-47bf-86eb-871a3aa0dcd9", 00:16:50.116 "strip_size_kb": 64, 00:16:50.116 "state": "configuring", 00:16:50.116 "raid_level": "raid5f", 00:16:50.116 "superblock": true, 00:16:50.116 "num_base_bdevs": 4, 00:16:50.116 "num_base_bdevs_discovered": 1, 00:16:50.116 "num_base_bdevs_operational": 4, 00:16:50.116 "base_bdevs_list": [ 00:16:50.116 { 00:16:50.116 "name": "BaseBdev1", 00:16:50.116 "uuid": "f27eb7c6-8440-4ad0-90d8-5f46f5515376", 00:16:50.116 "is_configured": true, 00:16:50.116 "data_offset": 2048, 00:16:50.116 "data_size": 63488 00:16:50.116 }, 00:16:50.116 { 00:16:50.116 "name": "BaseBdev2", 00:16:50.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.116 "is_configured": false, 00:16:50.116 "data_offset": 0, 00:16:50.116 "data_size": 0 00:16:50.116 }, 00:16:50.116 { 00:16:50.116 "name": "BaseBdev3", 00:16:50.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.116 "is_configured": false, 00:16:50.116 "data_offset": 0, 00:16:50.116 "data_size": 0 00:16:50.116 }, 00:16:50.116 { 00:16:50.116 "name": "BaseBdev4", 00:16:50.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.116 "is_configured": false, 00:16:50.116 "data_offset": 0, 00:16:50.116 "data_size": 0 00:16:50.116 } 00:16:50.116 ] 00:16:50.116 }' 00:16:50.116 10:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.116 10:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.685 10:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:50.685 10:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:50.685 10:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.685 [2024-11-15 10:45:11.623645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:50.685 BaseBdev2 00:16:50.685 10:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.685 10:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:50.685 10:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:50.685 10:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:50.685 10:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:50.685 10:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:50.685 10:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:50.685 10:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:50.685 10:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.685 10:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.685 10:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.685 10:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:50.685 10:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.685 10:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.685 [ 00:16:50.685 { 00:16:50.685 "name": "BaseBdev2", 00:16:50.685 "aliases": [ 00:16:50.685 
"3373f524-54a2-4db2-8903-aef3dfaa5d08" 00:16:50.685 ], 00:16:50.685 "product_name": "Malloc disk", 00:16:50.685 "block_size": 512, 00:16:50.685 "num_blocks": 65536, 00:16:50.685 "uuid": "3373f524-54a2-4db2-8903-aef3dfaa5d08", 00:16:50.685 "assigned_rate_limits": { 00:16:50.685 "rw_ios_per_sec": 0, 00:16:50.685 "rw_mbytes_per_sec": 0, 00:16:50.685 "r_mbytes_per_sec": 0, 00:16:50.685 "w_mbytes_per_sec": 0 00:16:50.685 }, 00:16:50.685 "claimed": true, 00:16:50.685 "claim_type": "exclusive_write", 00:16:50.686 "zoned": false, 00:16:50.686 "supported_io_types": { 00:16:50.686 "read": true, 00:16:50.686 "write": true, 00:16:50.686 "unmap": true, 00:16:50.686 "flush": true, 00:16:50.686 "reset": true, 00:16:50.686 "nvme_admin": false, 00:16:50.686 "nvme_io": false, 00:16:50.686 "nvme_io_md": false, 00:16:50.686 "write_zeroes": true, 00:16:50.686 "zcopy": true, 00:16:50.686 "get_zone_info": false, 00:16:50.686 "zone_management": false, 00:16:50.686 "zone_append": false, 00:16:50.686 "compare": false, 00:16:50.686 "compare_and_write": false, 00:16:50.686 "abort": true, 00:16:50.686 "seek_hole": false, 00:16:50.686 "seek_data": false, 00:16:50.686 "copy": true, 00:16:50.686 "nvme_iov_md": false 00:16:50.686 }, 00:16:50.686 "memory_domains": [ 00:16:50.686 { 00:16:50.686 "dma_device_id": "system", 00:16:50.686 "dma_device_type": 1 00:16:50.686 }, 00:16:50.686 { 00:16:50.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.686 "dma_device_type": 2 00:16:50.686 } 00:16:50.686 ], 00:16:50.686 "driver_specific": {} 00:16:50.686 } 00:16:50.686 ] 00:16:50.686 10:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.686 10:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:50.686 10:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:50.686 10:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:16:50.686 10:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:50.686 10:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:50.686 10:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:50.686 10:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:50.686 10:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:50.686 10:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:50.686 10:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.686 10:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.686 10:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.686 10:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.686 10:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.686 10:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.686 10:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.686 10:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.686 10:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.686 10:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.686 "name": "Existed_Raid", 00:16:50.686 "uuid": 
"7412ba85-1cc5-47bf-86eb-871a3aa0dcd9", 00:16:50.686 "strip_size_kb": 64, 00:16:50.686 "state": "configuring", 00:16:50.686 "raid_level": "raid5f", 00:16:50.686 "superblock": true, 00:16:50.686 "num_base_bdevs": 4, 00:16:50.686 "num_base_bdevs_discovered": 2, 00:16:50.686 "num_base_bdevs_operational": 4, 00:16:50.686 "base_bdevs_list": [ 00:16:50.686 { 00:16:50.686 "name": "BaseBdev1", 00:16:50.686 "uuid": "f27eb7c6-8440-4ad0-90d8-5f46f5515376", 00:16:50.686 "is_configured": true, 00:16:50.686 "data_offset": 2048, 00:16:50.686 "data_size": 63488 00:16:50.686 }, 00:16:50.686 { 00:16:50.686 "name": "BaseBdev2", 00:16:50.686 "uuid": "3373f524-54a2-4db2-8903-aef3dfaa5d08", 00:16:50.686 "is_configured": true, 00:16:50.686 "data_offset": 2048, 00:16:50.686 "data_size": 63488 00:16:50.686 }, 00:16:50.686 { 00:16:50.686 "name": "BaseBdev3", 00:16:50.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.686 "is_configured": false, 00:16:50.686 "data_offset": 0, 00:16:50.686 "data_size": 0 00:16:50.686 }, 00:16:50.686 { 00:16:50.686 "name": "BaseBdev4", 00:16:50.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.686 "is_configured": false, 00:16:50.686 "data_offset": 0, 00:16:50.686 "data_size": 0 00:16:50.686 } 00:16:50.686 ] 00:16:50.686 }' 00:16:50.686 10:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.686 10:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.253 10:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:51.253 10:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.253 10:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.253 [2024-11-15 10:45:12.240109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:51.253 BaseBdev3 
00:16:51.253 10:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.253 10:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:51.253 10:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:51.253 10:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:51.253 10:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:51.253 10:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:51.253 10:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:51.253 10:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:51.253 10:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.253 10:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.253 10:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.253 10:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:51.253 10:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.253 10:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.253 [ 00:16:51.253 { 00:16:51.253 "name": "BaseBdev3", 00:16:51.253 "aliases": [ 00:16:51.253 "89d0ef29-8f6f-4455-8862-4b13819744df" 00:16:51.253 ], 00:16:51.253 "product_name": "Malloc disk", 00:16:51.253 "block_size": 512, 00:16:51.253 "num_blocks": 65536, 00:16:51.253 "uuid": "89d0ef29-8f6f-4455-8862-4b13819744df", 00:16:51.253 
"assigned_rate_limits": { 00:16:51.253 "rw_ios_per_sec": 0, 00:16:51.253 "rw_mbytes_per_sec": 0, 00:16:51.253 "r_mbytes_per_sec": 0, 00:16:51.253 "w_mbytes_per_sec": 0 00:16:51.253 }, 00:16:51.253 "claimed": true, 00:16:51.253 "claim_type": "exclusive_write", 00:16:51.253 "zoned": false, 00:16:51.253 "supported_io_types": { 00:16:51.253 "read": true, 00:16:51.253 "write": true, 00:16:51.253 "unmap": true, 00:16:51.253 "flush": true, 00:16:51.253 "reset": true, 00:16:51.253 "nvme_admin": false, 00:16:51.253 "nvme_io": false, 00:16:51.253 "nvme_io_md": false, 00:16:51.253 "write_zeroes": true, 00:16:51.253 "zcopy": true, 00:16:51.253 "get_zone_info": false, 00:16:51.253 "zone_management": false, 00:16:51.253 "zone_append": false, 00:16:51.253 "compare": false, 00:16:51.253 "compare_and_write": false, 00:16:51.253 "abort": true, 00:16:51.253 "seek_hole": false, 00:16:51.253 "seek_data": false, 00:16:51.253 "copy": true, 00:16:51.253 "nvme_iov_md": false 00:16:51.253 }, 00:16:51.253 "memory_domains": [ 00:16:51.253 { 00:16:51.253 "dma_device_id": "system", 00:16:51.253 "dma_device_type": 1 00:16:51.253 }, 00:16:51.253 { 00:16:51.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.253 "dma_device_type": 2 00:16:51.253 } 00:16:51.253 ], 00:16:51.253 "driver_specific": {} 00:16:51.253 } 00:16:51.253 ] 00:16:51.253 10:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.253 10:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:51.253 10:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:51.253 10:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:51.253 10:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:51.253 10:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:16:51.253 10:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:51.253 10:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:51.253 10:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:51.253 10:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:51.253 10:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.253 10:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.253 10:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.253 10:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.253 10:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.253 10:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.253 10:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.253 10:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.253 10:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.253 10:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.253 "name": "Existed_Raid", 00:16:51.253 "uuid": "7412ba85-1cc5-47bf-86eb-871a3aa0dcd9", 00:16:51.253 "strip_size_kb": 64, 00:16:51.253 "state": "configuring", 00:16:51.253 "raid_level": "raid5f", 00:16:51.253 "superblock": true, 00:16:51.253 "num_base_bdevs": 4, 00:16:51.253 "num_base_bdevs_discovered": 3, 
00:16:51.253 "num_base_bdevs_operational": 4, 00:16:51.253 "base_bdevs_list": [ 00:16:51.253 { 00:16:51.253 "name": "BaseBdev1", 00:16:51.253 "uuid": "f27eb7c6-8440-4ad0-90d8-5f46f5515376", 00:16:51.253 "is_configured": true, 00:16:51.253 "data_offset": 2048, 00:16:51.253 "data_size": 63488 00:16:51.253 }, 00:16:51.253 { 00:16:51.253 "name": "BaseBdev2", 00:16:51.253 "uuid": "3373f524-54a2-4db2-8903-aef3dfaa5d08", 00:16:51.253 "is_configured": true, 00:16:51.253 "data_offset": 2048, 00:16:51.253 "data_size": 63488 00:16:51.253 }, 00:16:51.253 { 00:16:51.253 "name": "BaseBdev3", 00:16:51.253 "uuid": "89d0ef29-8f6f-4455-8862-4b13819744df", 00:16:51.253 "is_configured": true, 00:16:51.253 "data_offset": 2048, 00:16:51.253 "data_size": 63488 00:16:51.253 }, 00:16:51.253 { 00:16:51.253 "name": "BaseBdev4", 00:16:51.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.253 "is_configured": false, 00:16:51.253 "data_offset": 0, 00:16:51.253 "data_size": 0 00:16:51.253 } 00:16:51.253 ] 00:16:51.253 }' 00:16:51.253 10:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.253 10:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.820 10:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:51.820 10:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.820 10:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.820 [2024-11-15 10:45:12.807153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:51.820 BaseBdev4 00:16:51.820 [2024-11-15 10:45:12.807786] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:51.820 [2024-11-15 10:45:12.807812] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 
00:16:51.820 [2024-11-15 10:45:12.808146] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:51.820 10:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.820 10:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:51.820 10:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:51.820 10:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:51.820 10:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:51.820 10:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:51.820 10:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:51.820 10:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:51.820 10:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.820 10:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.820 [2024-11-15 10:45:12.815628] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:51.820 [2024-11-15 10:45:12.815661] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:51.820 [2024-11-15 10:45:12.815979] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:51.820 10:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.820 10:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:51.820 10:45:12 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.820 10:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.820 [ 00:16:51.820 { 00:16:51.820 "name": "BaseBdev4", 00:16:51.820 "aliases": [ 00:16:51.820 "2f3753a3-f757-46d6-bdf8-35440b4cf60e" 00:16:51.820 ], 00:16:51.820 "product_name": "Malloc disk", 00:16:51.820 "block_size": 512, 00:16:51.820 "num_blocks": 65536, 00:16:51.820 "uuid": "2f3753a3-f757-46d6-bdf8-35440b4cf60e", 00:16:51.820 "assigned_rate_limits": { 00:16:51.820 "rw_ios_per_sec": 0, 00:16:51.820 "rw_mbytes_per_sec": 0, 00:16:51.820 "r_mbytes_per_sec": 0, 00:16:51.820 "w_mbytes_per_sec": 0 00:16:51.820 }, 00:16:51.820 "claimed": true, 00:16:51.820 "claim_type": "exclusive_write", 00:16:51.820 "zoned": false, 00:16:51.820 "supported_io_types": { 00:16:51.820 "read": true, 00:16:51.820 "write": true, 00:16:51.820 "unmap": true, 00:16:51.820 "flush": true, 00:16:51.820 "reset": true, 00:16:51.820 "nvme_admin": false, 00:16:51.820 "nvme_io": false, 00:16:51.820 "nvme_io_md": false, 00:16:51.820 "write_zeroes": true, 00:16:51.820 "zcopy": true, 00:16:51.820 "get_zone_info": false, 00:16:51.820 "zone_management": false, 00:16:51.820 "zone_append": false, 00:16:51.820 "compare": false, 00:16:51.820 "compare_and_write": false, 00:16:51.820 "abort": true, 00:16:51.820 "seek_hole": false, 00:16:51.820 "seek_data": false, 00:16:51.820 "copy": true, 00:16:51.820 "nvme_iov_md": false 00:16:51.820 }, 00:16:51.820 "memory_domains": [ 00:16:51.820 { 00:16:51.820 "dma_device_id": "system", 00:16:51.820 "dma_device_type": 1 00:16:51.820 }, 00:16:51.820 { 00:16:51.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.820 "dma_device_type": 2 00:16:51.820 } 00:16:51.820 ], 00:16:51.820 "driver_specific": {} 00:16:51.820 } 00:16:51.820 ] 00:16:51.820 10:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.820 10:45:12 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:51.820 10:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:51.820 10:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:51.820 10:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:51.820 10:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:51.820 10:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:51.820 10:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:51.820 10:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:51.820 10:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:51.820 10:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.820 10:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.820 10:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.820 10:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.820 10:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.820 10:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.820 10:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.820 10:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:51.820 10:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.820 10:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.820 "name": "Existed_Raid", 00:16:51.820 "uuid": "7412ba85-1cc5-47bf-86eb-871a3aa0dcd9", 00:16:51.820 "strip_size_kb": 64, 00:16:51.820 "state": "online", 00:16:51.820 "raid_level": "raid5f", 00:16:51.820 "superblock": true, 00:16:51.820 "num_base_bdevs": 4, 00:16:51.820 "num_base_bdevs_discovered": 4, 00:16:51.820 "num_base_bdevs_operational": 4, 00:16:51.820 "base_bdevs_list": [ 00:16:51.820 { 00:16:51.820 "name": "BaseBdev1", 00:16:51.820 "uuid": "f27eb7c6-8440-4ad0-90d8-5f46f5515376", 00:16:51.820 "is_configured": true, 00:16:51.820 "data_offset": 2048, 00:16:51.820 "data_size": 63488 00:16:51.820 }, 00:16:51.820 { 00:16:51.820 "name": "BaseBdev2", 00:16:51.820 "uuid": "3373f524-54a2-4db2-8903-aef3dfaa5d08", 00:16:51.820 "is_configured": true, 00:16:51.820 "data_offset": 2048, 00:16:51.820 "data_size": 63488 00:16:51.820 }, 00:16:51.820 { 00:16:51.820 "name": "BaseBdev3", 00:16:51.820 "uuid": "89d0ef29-8f6f-4455-8862-4b13819744df", 00:16:51.820 "is_configured": true, 00:16:51.820 "data_offset": 2048, 00:16:51.820 "data_size": 63488 00:16:51.820 }, 00:16:51.820 { 00:16:51.820 "name": "BaseBdev4", 00:16:51.820 "uuid": "2f3753a3-f757-46d6-bdf8-35440b4cf60e", 00:16:51.820 "is_configured": true, 00:16:51.820 "data_offset": 2048, 00:16:51.820 "data_size": 63488 00:16:51.820 } 00:16:51.820 ] 00:16:51.820 }' 00:16:51.820 10:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.820 10:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.386 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:52.386 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:16:52.386 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:52.386 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:52.386 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:52.386 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:52.386 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:52.386 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:52.386 10:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.386 10:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.386 [2024-11-15 10:45:13.396218] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:52.386 10:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.386 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:52.386 "name": "Existed_Raid", 00:16:52.386 "aliases": [ 00:16:52.386 "7412ba85-1cc5-47bf-86eb-871a3aa0dcd9" 00:16:52.386 ], 00:16:52.386 "product_name": "Raid Volume", 00:16:52.386 "block_size": 512, 00:16:52.386 "num_blocks": 190464, 00:16:52.386 "uuid": "7412ba85-1cc5-47bf-86eb-871a3aa0dcd9", 00:16:52.386 "assigned_rate_limits": { 00:16:52.386 "rw_ios_per_sec": 0, 00:16:52.386 "rw_mbytes_per_sec": 0, 00:16:52.386 "r_mbytes_per_sec": 0, 00:16:52.386 "w_mbytes_per_sec": 0 00:16:52.386 }, 00:16:52.386 "claimed": false, 00:16:52.386 "zoned": false, 00:16:52.386 "supported_io_types": { 00:16:52.386 "read": true, 00:16:52.386 "write": true, 00:16:52.386 "unmap": false, 00:16:52.386 "flush": false, 
00:16:52.386 "reset": true, 00:16:52.386 "nvme_admin": false, 00:16:52.386 "nvme_io": false, 00:16:52.386 "nvme_io_md": false, 00:16:52.386 "write_zeroes": true, 00:16:52.386 "zcopy": false, 00:16:52.387 "get_zone_info": false, 00:16:52.387 "zone_management": false, 00:16:52.387 "zone_append": false, 00:16:52.387 "compare": false, 00:16:52.387 "compare_and_write": false, 00:16:52.387 "abort": false, 00:16:52.387 "seek_hole": false, 00:16:52.387 "seek_data": false, 00:16:52.387 "copy": false, 00:16:52.387 "nvme_iov_md": false 00:16:52.387 }, 00:16:52.387 "driver_specific": { 00:16:52.387 "raid": { 00:16:52.387 "uuid": "7412ba85-1cc5-47bf-86eb-871a3aa0dcd9", 00:16:52.387 "strip_size_kb": 64, 00:16:52.387 "state": "online", 00:16:52.387 "raid_level": "raid5f", 00:16:52.387 "superblock": true, 00:16:52.387 "num_base_bdevs": 4, 00:16:52.387 "num_base_bdevs_discovered": 4, 00:16:52.387 "num_base_bdevs_operational": 4, 00:16:52.387 "base_bdevs_list": [ 00:16:52.387 { 00:16:52.387 "name": "BaseBdev1", 00:16:52.387 "uuid": "f27eb7c6-8440-4ad0-90d8-5f46f5515376", 00:16:52.387 "is_configured": true, 00:16:52.387 "data_offset": 2048, 00:16:52.387 "data_size": 63488 00:16:52.387 }, 00:16:52.387 { 00:16:52.387 "name": "BaseBdev2", 00:16:52.387 "uuid": "3373f524-54a2-4db2-8903-aef3dfaa5d08", 00:16:52.387 "is_configured": true, 00:16:52.387 "data_offset": 2048, 00:16:52.387 "data_size": 63488 00:16:52.387 }, 00:16:52.387 { 00:16:52.387 "name": "BaseBdev3", 00:16:52.387 "uuid": "89d0ef29-8f6f-4455-8862-4b13819744df", 00:16:52.387 "is_configured": true, 00:16:52.387 "data_offset": 2048, 00:16:52.387 "data_size": 63488 00:16:52.387 }, 00:16:52.387 { 00:16:52.387 "name": "BaseBdev4", 00:16:52.387 "uuid": "2f3753a3-f757-46d6-bdf8-35440b4cf60e", 00:16:52.387 "is_configured": true, 00:16:52.387 "data_offset": 2048, 00:16:52.387 "data_size": 63488 00:16:52.387 } 00:16:52.387 ] 00:16:52.387 } 00:16:52.387 } 00:16:52.387 }' 00:16:52.387 10:45:13 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:52.387 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:52.387 BaseBdev2 00:16:52.387 BaseBdev3 00:16:52.387 BaseBdev4' 00:16:52.387 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:52.644 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:52.644 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:52.644 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:52.644 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:52.644 10:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.644 10:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.644 10:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.644 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:52.644 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:52.644 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:52.644 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:52.644 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:52.644 10:45:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.644 10:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.644 10:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.644 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:52.644 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:52.644 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:52.644 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:52.644 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:52.644 10:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.644 10:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.644 10:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.644 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:52.644 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:52.644 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:52.644 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:52.644 10:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.644 10:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:52.644 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:52.644 10:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.644 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:52.644 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:52.644 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:52.644 10:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.644 10:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.644 [2024-11-15 10:45:13.784121] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:52.901 10:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.901 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:52.901 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:52.901 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:52.901 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:52.901 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:52.901 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:52.901 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:52.901 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:52.901 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:52.901 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:52.901 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:52.901 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.901 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.901 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.901 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.901 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.901 10:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.901 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.901 10:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.901 10:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.901 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.901 "name": "Existed_Raid", 00:16:52.901 "uuid": "7412ba85-1cc5-47bf-86eb-871a3aa0dcd9", 00:16:52.901 "strip_size_kb": 64, 00:16:52.901 "state": "online", 00:16:52.901 "raid_level": "raid5f", 00:16:52.901 "superblock": true, 00:16:52.901 "num_base_bdevs": 4, 00:16:52.901 "num_base_bdevs_discovered": 3, 00:16:52.901 "num_base_bdevs_operational": 3, 00:16:52.901 "base_bdevs_list": [ 00:16:52.901 { 00:16:52.901 "name": null, 00:16:52.901 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:52.901 "is_configured": false, 00:16:52.901 "data_offset": 0, 00:16:52.901 "data_size": 63488 00:16:52.901 }, 00:16:52.901 { 00:16:52.901 "name": "BaseBdev2", 00:16:52.901 "uuid": "3373f524-54a2-4db2-8903-aef3dfaa5d08", 00:16:52.901 "is_configured": true, 00:16:52.901 "data_offset": 2048, 00:16:52.901 "data_size": 63488 00:16:52.901 }, 00:16:52.901 { 00:16:52.901 "name": "BaseBdev3", 00:16:52.901 "uuid": "89d0ef29-8f6f-4455-8862-4b13819744df", 00:16:52.901 "is_configured": true, 00:16:52.901 "data_offset": 2048, 00:16:52.901 "data_size": 63488 00:16:52.901 }, 00:16:52.901 { 00:16:52.901 "name": "BaseBdev4", 00:16:52.901 "uuid": "2f3753a3-f757-46d6-bdf8-35440b4cf60e", 00:16:52.901 "is_configured": true, 00:16:52.901 "data_offset": 2048, 00:16:52.901 "data_size": 63488 00:16:52.901 } 00:16:52.901 ] 00:16:52.901 }' 00:16:52.901 10:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.901 10:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.468 10:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:53.468 10:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:53.468 10:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.468 10:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:53.468 10:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.468 10:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.468 10:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.468 10:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:16:53.468 10:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:53.468 10:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:53.468 10:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.468 10:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.468 [2024-11-15 10:45:14.466340] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:53.468 [2024-11-15 10:45:14.466741] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:53.468 [2024-11-15 10:45:14.556775] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:53.468 10:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.468 10:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:53.468 10:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:53.468 10:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.468 10:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:53.468 10:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.468 10:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.468 10:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.468 10:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:53.468 10:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:53.468 
10:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:53.468 10:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.468 10:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.468 [2024-11-15 10:45:14.624876] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:53.726 10:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.726 10:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:53.726 10:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:53.726 10:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:53.726 10:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.726 10:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.726 10:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.726 10:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.726 10:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:53.726 10:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:53.726 10:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:53.726 10:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.726 10:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.726 [2024-11-15 10:45:14.779832] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:53.726 [2024-11-15 10:45:14.780048] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:53.726 10:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.726 10:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:53.726 10:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:53.726 10:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.726 10:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.726 10:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.726 10:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:53.726 10:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.985 10:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:53.985 10:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:53.985 10:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:53.985 10:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:53.985 10:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:53.985 10:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:53.985 10:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.985 10:45:14 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:53.985 BaseBdev2 00:16:53.985 10:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.985 10:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:53.985 10:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:53.985 10:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:53.985 10:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:53.985 10:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:53.985 10:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:53.985 10:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:53.985 10:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.985 10:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.985 10:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.985 10:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:53.985 10:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.985 10:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.985 [ 00:16:53.985 { 00:16:53.985 "name": "BaseBdev2", 00:16:53.985 "aliases": [ 00:16:53.985 "8bde9c26-48b9-4fa6-b71d-17919901d79d" 00:16:53.985 ], 00:16:53.985 "product_name": "Malloc disk", 00:16:53.985 "block_size": 512, 00:16:53.985 "num_blocks": 65536, 00:16:53.985 "uuid": 
"8bde9c26-48b9-4fa6-b71d-17919901d79d", 00:16:53.985 "assigned_rate_limits": { 00:16:53.985 "rw_ios_per_sec": 0, 00:16:53.985 "rw_mbytes_per_sec": 0, 00:16:53.985 "r_mbytes_per_sec": 0, 00:16:53.985 "w_mbytes_per_sec": 0 00:16:53.985 }, 00:16:53.985 "claimed": false, 00:16:53.985 "zoned": false, 00:16:53.985 "supported_io_types": { 00:16:53.985 "read": true, 00:16:53.985 "write": true, 00:16:53.985 "unmap": true, 00:16:53.985 "flush": true, 00:16:53.985 "reset": true, 00:16:53.985 "nvme_admin": false, 00:16:53.985 "nvme_io": false, 00:16:53.985 "nvme_io_md": false, 00:16:53.985 "write_zeroes": true, 00:16:53.985 "zcopy": true, 00:16:53.985 "get_zone_info": false, 00:16:53.985 "zone_management": false, 00:16:53.985 "zone_append": false, 00:16:53.985 "compare": false, 00:16:53.985 "compare_and_write": false, 00:16:53.985 "abort": true, 00:16:53.985 "seek_hole": false, 00:16:53.985 "seek_data": false, 00:16:53.985 "copy": true, 00:16:53.985 "nvme_iov_md": false 00:16:53.985 }, 00:16:53.985 "memory_domains": [ 00:16:53.985 { 00:16:53.985 "dma_device_id": "system", 00:16:53.985 "dma_device_type": 1 00:16:53.985 }, 00:16:53.985 { 00:16:53.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.985 "dma_device_type": 2 00:16:53.985 } 00:16:53.985 ], 00:16:53.985 "driver_specific": {} 00:16:53.985 } 00:16:53.985 ] 00:16:53.985 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.985 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:53.985 10:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:53.985 10:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:53.986 10:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:53.986 10:45:15 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.986 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.986 BaseBdev3 00:16:53.986 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.986 10:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:53.986 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:53.986 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:53.986 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:53.986 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:53.986 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:53.986 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:53.986 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.986 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.986 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.986 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:53.986 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.986 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.986 [ 00:16:53.986 { 00:16:53.986 "name": "BaseBdev3", 00:16:53.986 "aliases": [ 00:16:53.986 "6bb8011d-6320-4eba-8500-930597caede5" 00:16:53.986 ], 00:16:53.986 
"product_name": "Malloc disk", 00:16:53.986 "block_size": 512, 00:16:53.986 "num_blocks": 65536, 00:16:53.986 "uuid": "6bb8011d-6320-4eba-8500-930597caede5", 00:16:53.986 "assigned_rate_limits": { 00:16:53.986 "rw_ios_per_sec": 0, 00:16:53.986 "rw_mbytes_per_sec": 0, 00:16:53.986 "r_mbytes_per_sec": 0, 00:16:53.986 "w_mbytes_per_sec": 0 00:16:53.986 }, 00:16:53.986 "claimed": false, 00:16:53.986 "zoned": false, 00:16:53.986 "supported_io_types": { 00:16:53.986 "read": true, 00:16:53.986 "write": true, 00:16:53.986 "unmap": true, 00:16:53.986 "flush": true, 00:16:53.986 "reset": true, 00:16:53.986 "nvme_admin": false, 00:16:53.986 "nvme_io": false, 00:16:53.986 "nvme_io_md": false, 00:16:53.986 "write_zeroes": true, 00:16:53.986 "zcopy": true, 00:16:53.986 "get_zone_info": false, 00:16:53.986 "zone_management": false, 00:16:53.986 "zone_append": false, 00:16:53.986 "compare": false, 00:16:53.986 "compare_and_write": false, 00:16:53.986 "abort": true, 00:16:53.986 "seek_hole": false, 00:16:53.986 "seek_data": false, 00:16:53.986 "copy": true, 00:16:53.986 "nvme_iov_md": false 00:16:53.986 }, 00:16:53.986 "memory_domains": [ 00:16:53.986 { 00:16:53.986 "dma_device_id": "system", 00:16:53.986 "dma_device_type": 1 00:16:53.986 }, 00:16:53.986 { 00:16:53.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.986 "dma_device_type": 2 00:16:53.986 } 00:16:53.986 ], 00:16:53.986 "driver_specific": {} 00:16:53.986 } 00:16:53.986 ] 00:16:53.986 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.986 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:53.986 10:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:53.986 10:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:53.986 10:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:16:53.986 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.986 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.986 BaseBdev4 00:16:53.986 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.986 10:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:53.986 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:53.986 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:53.986 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:53.986 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:53.986 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:53.986 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:53.986 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.986 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.986 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.986 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:53.986 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.986 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.986 [ 00:16:53.986 { 00:16:53.986 "name": "BaseBdev4", 00:16:53.986 
"aliases": [ 00:16:53.986 "872d229f-4df5-48b2-a840-55b5324f20c4" 00:16:53.986 ], 00:16:53.986 "product_name": "Malloc disk", 00:16:53.986 "block_size": 512, 00:16:53.986 "num_blocks": 65536, 00:16:53.986 "uuid": "872d229f-4df5-48b2-a840-55b5324f20c4", 00:16:53.986 "assigned_rate_limits": { 00:16:53.986 "rw_ios_per_sec": 0, 00:16:53.986 "rw_mbytes_per_sec": 0, 00:16:53.986 "r_mbytes_per_sec": 0, 00:16:53.986 "w_mbytes_per_sec": 0 00:16:53.986 }, 00:16:53.986 "claimed": false, 00:16:53.986 "zoned": false, 00:16:53.986 "supported_io_types": { 00:16:53.986 "read": true, 00:16:53.986 "write": true, 00:16:53.986 "unmap": true, 00:16:53.986 "flush": true, 00:16:53.986 "reset": true, 00:16:53.986 "nvme_admin": false, 00:16:53.986 "nvme_io": false, 00:16:53.986 "nvme_io_md": false, 00:16:53.986 "write_zeroes": true, 00:16:53.986 "zcopy": true, 00:16:53.986 "get_zone_info": false, 00:16:53.986 "zone_management": false, 00:16:53.986 "zone_append": false, 00:16:53.986 "compare": false, 00:16:53.986 "compare_and_write": false, 00:16:53.986 "abort": true, 00:16:53.986 "seek_hole": false, 00:16:53.986 "seek_data": false, 00:16:53.986 "copy": true, 00:16:53.986 "nvme_iov_md": false 00:16:54.245 }, 00:16:54.245 "memory_domains": [ 00:16:54.245 { 00:16:54.245 "dma_device_id": "system", 00:16:54.245 "dma_device_type": 1 00:16:54.245 }, 00:16:54.245 { 00:16:54.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.245 "dma_device_type": 2 00:16:54.245 } 00:16:54.245 ], 00:16:54.245 "driver_specific": {} 00:16:54.245 } 00:16:54.245 ] 00:16:54.245 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.245 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:54.245 10:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:54.245 10:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:54.245 
10:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:54.245 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.245 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.245 [2024-11-15 10:45:15.151597] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:54.245 [2024-11-15 10:45:15.151809] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:54.245 [2024-11-15 10:45:15.151984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:54.245 [2024-11-15 10:45:15.154697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:54.245 [2024-11-15 10:45:15.154953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:54.245 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.245 10:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:54.245 10:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:54.245 10:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:54.245 10:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:54.245 10:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:54.245 10:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:54.245 10:45:15 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.245 10:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.245 10:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.245 10:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.245 10:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.245 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.245 10:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.245 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.245 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.245 10:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.245 "name": "Existed_Raid", 00:16:54.245 "uuid": "a660e41c-9383-4e4d-ab96-4efe4279bacb", 00:16:54.245 "strip_size_kb": 64, 00:16:54.245 "state": "configuring", 00:16:54.245 "raid_level": "raid5f", 00:16:54.245 "superblock": true, 00:16:54.245 "num_base_bdevs": 4, 00:16:54.245 "num_base_bdevs_discovered": 3, 00:16:54.245 "num_base_bdevs_operational": 4, 00:16:54.245 "base_bdevs_list": [ 00:16:54.245 { 00:16:54.245 "name": "BaseBdev1", 00:16:54.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.245 "is_configured": false, 00:16:54.245 "data_offset": 0, 00:16:54.245 "data_size": 0 00:16:54.245 }, 00:16:54.245 { 00:16:54.245 "name": "BaseBdev2", 00:16:54.245 "uuid": "8bde9c26-48b9-4fa6-b71d-17919901d79d", 00:16:54.245 "is_configured": true, 00:16:54.245 "data_offset": 2048, 00:16:54.245 "data_size": 63488 00:16:54.245 }, 00:16:54.245 { 00:16:54.245 "name": "BaseBdev3", 
00:16:54.245 "uuid": "6bb8011d-6320-4eba-8500-930597caede5", 00:16:54.245 "is_configured": true, 00:16:54.245 "data_offset": 2048, 00:16:54.245 "data_size": 63488 00:16:54.245 }, 00:16:54.245 { 00:16:54.245 "name": "BaseBdev4", 00:16:54.245 "uuid": "872d229f-4df5-48b2-a840-55b5324f20c4", 00:16:54.245 "is_configured": true, 00:16:54.245 "data_offset": 2048, 00:16:54.245 "data_size": 63488 00:16:54.245 } 00:16:54.245 ] 00:16:54.245 }' 00:16:54.245 10:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.245 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.813 10:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:54.814 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.814 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.814 [2024-11-15 10:45:15.679745] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:54.814 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.814 10:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:54.814 10:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:54.814 10:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:54.814 10:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:54.814 10:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:54.814 10:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:54.814 
10:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.814 10:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.814 10:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.814 10:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.814 10:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.814 10:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.814 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.814 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.814 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.814 10:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.814 "name": "Existed_Raid", 00:16:54.814 "uuid": "a660e41c-9383-4e4d-ab96-4efe4279bacb", 00:16:54.814 "strip_size_kb": 64, 00:16:54.814 "state": "configuring", 00:16:54.814 "raid_level": "raid5f", 00:16:54.814 "superblock": true, 00:16:54.814 "num_base_bdevs": 4, 00:16:54.814 "num_base_bdevs_discovered": 2, 00:16:54.814 "num_base_bdevs_operational": 4, 00:16:54.814 "base_bdevs_list": [ 00:16:54.814 { 00:16:54.814 "name": "BaseBdev1", 00:16:54.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.814 "is_configured": false, 00:16:54.814 "data_offset": 0, 00:16:54.814 "data_size": 0 00:16:54.814 }, 00:16:54.814 { 00:16:54.814 "name": null, 00:16:54.814 "uuid": "8bde9c26-48b9-4fa6-b71d-17919901d79d", 00:16:54.814 "is_configured": false, 00:16:54.814 "data_offset": 0, 00:16:54.814 "data_size": 63488 00:16:54.814 }, 00:16:54.814 { 
00:16:54.814 "name": "BaseBdev3", 00:16:54.814 "uuid": "6bb8011d-6320-4eba-8500-930597caede5", 00:16:54.814 "is_configured": true, 00:16:54.814 "data_offset": 2048, 00:16:54.814 "data_size": 63488 00:16:54.814 }, 00:16:54.814 { 00:16:54.814 "name": "BaseBdev4", 00:16:54.814 "uuid": "872d229f-4df5-48b2-a840-55b5324f20c4", 00:16:54.814 "is_configured": true, 00:16:54.814 "data_offset": 2048, 00:16:54.814 "data_size": 63488 00:16:54.814 } 00:16:54.814 ] 00:16:54.814 }' 00:16:54.814 10:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.814 10:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.073 10:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.073 10:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:55.073 10:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.073 10:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.073 10:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.331 10:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:55.331 10:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:55.331 10:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.332 10:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.332 [2024-11-15 10:45:16.297808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:55.332 BaseBdev1 00:16:55.332 10:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:55.332 10:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:55.332 10:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:55.332 10:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:55.332 10:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:55.332 10:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:55.332 10:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:55.332 10:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:55.332 10:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.332 10:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.332 10:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.332 10:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:55.332 10:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.332 10:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.332 [ 00:16:55.332 { 00:16:55.332 "name": "BaseBdev1", 00:16:55.332 "aliases": [ 00:16:55.332 "64a2681a-ba68-48d2-b1f6-12ead915c255" 00:16:55.332 ], 00:16:55.332 "product_name": "Malloc disk", 00:16:55.332 "block_size": 512, 00:16:55.332 "num_blocks": 65536, 00:16:55.332 "uuid": "64a2681a-ba68-48d2-b1f6-12ead915c255", 00:16:55.332 "assigned_rate_limits": { 00:16:55.332 "rw_ios_per_sec": 0, 00:16:55.332 "rw_mbytes_per_sec": 0, 00:16:55.332 
"r_mbytes_per_sec": 0, 00:16:55.332 "w_mbytes_per_sec": 0 00:16:55.332 }, 00:16:55.332 "claimed": true, 00:16:55.332 "claim_type": "exclusive_write", 00:16:55.332 "zoned": false, 00:16:55.332 "supported_io_types": { 00:16:55.332 "read": true, 00:16:55.332 "write": true, 00:16:55.332 "unmap": true, 00:16:55.332 "flush": true, 00:16:55.332 "reset": true, 00:16:55.332 "nvme_admin": false, 00:16:55.332 "nvme_io": false, 00:16:55.332 "nvme_io_md": false, 00:16:55.332 "write_zeroes": true, 00:16:55.332 "zcopy": true, 00:16:55.332 "get_zone_info": false, 00:16:55.332 "zone_management": false, 00:16:55.332 "zone_append": false, 00:16:55.332 "compare": false, 00:16:55.332 "compare_and_write": false, 00:16:55.332 "abort": true, 00:16:55.332 "seek_hole": false, 00:16:55.332 "seek_data": false, 00:16:55.332 "copy": true, 00:16:55.332 "nvme_iov_md": false 00:16:55.332 }, 00:16:55.332 "memory_domains": [ 00:16:55.332 { 00:16:55.332 "dma_device_id": "system", 00:16:55.332 "dma_device_type": 1 00:16:55.332 }, 00:16:55.332 { 00:16:55.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.332 "dma_device_type": 2 00:16:55.332 } 00:16:55.332 ], 00:16:55.332 "driver_specific": {} 00:16:55.332 } 00:16:55.332 ] 00:16:55.332 10:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.332 10:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:55.332 10:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:55.332 10:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:55.332 10:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:55.332 10:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:55.332 10:45:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:55.332 10:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:55.332 10:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.332 10:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.332 10:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.332 10:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.332 10:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.332 10:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.332 10:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.332 10:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.332 10:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.332 10:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.332 "name": "Existed_Raid", 00:16:55.332 "uuid": "a660e41c-9383-4e4d-ab96-4efe4279bacb", 00:16:55.332 "strip_size_kb": 64, 00:16:55.332 "state": "configuring", 00:16:55.332 "raid_level": "raid5f", 00:16:55.332 "superblock": true, 00:16:55.332 "num_base_bdevs": 4, 00:16:55.332 "num_base_bdevs_discovered": 3, 00:16:55.332 "num_base_bdevs_operational": 4, 00:16:55.332 "base_bdevs_list": [ 00:16:55.332 { 00:16:55.332 "name": "BaseBdev1", 00:16:55.332 "uuid": "64a2681a-ba68-48d2-b1f6-12ead915c255", 00:16:55.332 "is_configured": true, 00:16:55.332 "data_offset": 2048, 00:16:55.332 "data_size": 63488 00:16:55.332 
}, 00:16:55.332 { 00:16:55.332 "name": null, 00:16:55.332 "uuid": "8bde9c26-48b9-4fa6-b71d-17919901d79d", 00:16:55.332 "is_configured": false, 00:16:55.332 "data_offset": 0, 00:16:55.332 "data_size": 63488 00:16:55.332 }, 00:16:55.332 { 00:16:55.332 "name": "BaseBdev3", 00:16:55.332 "uuid": "6bb8011d-6320-4eba-8500-930597caede5", 00:16:55.332 "is_configured": true, 00:16:55.332 "data_offset": 2048, 00:16:55.332 "data_size": 63488 00:16:55.332 }, 00:16:55.332 { 00:16:55.332 "name": "BaseBdev4", 00:16:55.332 "uuid": "872d229f-4df5-48b2-a840-55b5324f20c4", 00:16:55.332 "is_configured": true, 00:16:55.332 "data_offset": 2048, 00:16:55.332 "data_size": 63488 00:16:55.332 } 00:16:55.332 ] 00:16:55.332 }' 00:16:55.332 10:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.332 10:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.898 10:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:55.898 10:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.898 10:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.898 10:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.898 10:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.898 10:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:55.898 10:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:55.898 10:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.898 10:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.898 
[2024-11-15 10:45:16.886028] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:55.898 10:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.898 10:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:55.898 10:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:55.898 10:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:55.898 10:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:55.898 10:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:55.898 10:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:55.898 10:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.898 10:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.898 10:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.898 10:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.898 10:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.898 10:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.898 10:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.898 10:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.898 10:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:16:55.899 10:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.899 "name": "Existed_Raid", 00:16:55.899 "uuid": "a660e41c-9383-4e4d-ab96-4efe4279bacb", 00:16:55.899 "strip_size_kb": 64, 00:16:55.899 "state": "configuring", 00:16:55.899 "raid_level": "raid5f", 00:16:55.899 "superblock": true, 00:16:55.899 "num_base_bdevs": 4, 00:16:55.899 "num_base_bdevs_discovered": 2, 00:16:55.899 "num_base_bdevs_operational": 4, 00:16:55.899 "base_bdevs_list": [ 00:16:55.899 { 00:16:55.899 "name": "BaseBdev1", 00:16:55.899 "uuid": "64a2681a-ba68-48d2-b1f6-12ead915c255", 00:16:55.899 "is_configured": true, 00:16:55.899 "data_offset": 2048, 00:16:55.899 "data_size": 63488 00:16:55.899 }, 00:16:55.899 { 00:16:55.899 "name": null, 00:16:55.899 "uuid": "8bde9c26-48b9-4fa6-b71d-17919901d79d", 00:16:55.899 "is_configured": false, 00:16:55.899 "data_offset": 0, 00:16:55.899 "data_size": 63488 00:16:55.899 }, 00:16:55.899 { 00:16:55.899 "name": null, 00:16:55.899 "uuid": "6bb8011d-6320-4eba-8500-930597caede5", 00:16:55.899 "is_configured": false, 00:16:55.899 "data_offset": 0, 00:16:55.899 "data_size": 63488 00:16:55.899 }, 00:16:55.899 { 00:16:55.899 "name": "BaseBdev4", 00:16:55.899 "uuid": "872d229f-4df5-48b2-a840-55b5324f20c4", 00:16:55.899 "is_configured": true, 00:16:55.899 "data_offset": 2048, 00:16:55.899 "data_size": 63488 00:16:55.899 } 00:16:55.899 ] 00:16:55.899 }' 00:16:55.899 10:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.899 10:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.465 10:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:56.465 10:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.465 10:45:17 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.465 10:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.465 10:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.465 10:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:56.465 10:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:56.465 10:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.465 10:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.465 [2024-11-15 10:45:17.458153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:56.465 10:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.465 10:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:56.465 10:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:56.465 10:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:56.465 10:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:56.465 10:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:56.465 10:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:56.465 10:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.465 10:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.465 10:45:17 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.465 10:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.465 10:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.465 10:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.465 10:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.465 10:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.465 10:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.465 10:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.465 "name": "Existed_Raid", 00:16:56.465 "uuid": "a660e41c-9383-4e4d-ab96-4efe4279bacb", 00:16:56.465 "strip_size_kb": 64, 00:16:56.465 "state": "configuring", 00:16:56.465 "raid_level": "raid5f", 00:16:56.465 "superblock": true, 00:16:56.465 "num_base_bdevs": 4, 00:16:56.465 "num_base_bdevs_discovered": 3, 00:16:56.465 "num_base_bdevs_operational": 4, 00:16:56.465 "base_bdevs_list": [ 00:16:56.465 { 00:16:56.465 "name": "BaseBdev1", 00:16:56.465 "uuid": "64a2681a-ba68-48d2-b1f6-12ead915c255", 00:16:56.465 "is_configured": true, 00:16:56.465 "data_offset": 2048, 00:16:56.465 "data_size": 63488 00:16:56.465 }, 00:16:56.465 { 00:16:56.465 "name": null, 00:16:56.465 "uuid": "8bde9c26-48b9-4fa6-b71d-17919901d79d", 00:16:56.465 "is_configured": false, 00:16:56.465 "data_offset": 0, 00:16:56.465 "data_size": 63488 00:16:56.465 }, 00:16:56.465 { 00:16:56.465 "name": "BaseBdev3", 00:16:56.465 "uuid": "6bb8011d-6320-4eba-8500-930597caede5", 00:16:56.465 "is_configured": true, 00:16:56.465 "data_offset": 2048, 00:16:56.465 "data_size": 63488 00:16:56.465 }, 00:16:56.465 { 
00:16:56.465 "name": "BaseBdev4", 00:16:56.465 "uuid": "872d229f-4df5-48b2-a840-55b5324f20c4", 00:16:56.465 "is_configured": true, 00:16:56.465 "data_offset": 2048, 00:16:56.465 "data_size": 63488 00:16:56.465 } 00:16:56.465 ] 00:16:56.465 }' 00:16:56.465 10:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.465 10:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.032 10:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:57.032 10:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.032 10:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.032 10:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.032 10:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.032 10:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:57.032 10:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:57.032 10:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.032 10:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.032 [2024-11-15 10:45:18.034347] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:57.032 10:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.032 10:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:57.032 10:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:16:57.032 10:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:57.032 10:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:57.032 10:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:57.032 10:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:57.032 10:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.032 10:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.032 10:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.032 10:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.032 10:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.032 10:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.032 10:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.032 10:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.032 10:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.032 10:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.032 "name": "Existed_Raid", 00:16:57.032 "uuid": "a660e41c-9383-4e4d-ab96-4efe4279bacb", 00:16:57.032 "strip_size_kb": 64, 00:16:57.032 "state": "configuring", 00:16:57.032 "raid_level": "raid5f", 00:16:57.032 "superblock": true, 00:16:57.032 "num_base_bdevs": 4, 00:16:57.032 "num_base_bdevs_discovered": 2, 00:16:57.032 
"num_base_bdevs_operational": 4, 00:16:57.032 "base_bdevs_list": [ 00:16:57.032 { 00:16:57.032 "name": null, 00:16:57.032 "uuid": "64a2681a-ba68-48d2-b1f6-12ead915c255", 00:16:57.032 "is_configured": false, 00:16:57.032 "data_offset": 0, 00:16:57.032 "data_size": 63488 00:16:57.032 }, 00:16:57.032 { 00:16:57.032 "name": null, 00:16:57.032 "uuid": "8bde9c26-48b9-4fa6-b71d-17919901d79d", 00:16:57.032 "is_configured": false, 00:16:57.032 "data_offset": 0, 00:16:57.032 "data_size": 63488 00:16:57.032 }, 00:16:57.032 { 00:16:57.032 "name": "BaseBdev3", 00:16:57.032 "uuid": "6bb8011d-6320-4eba-8500-930597caede5", 00:16:57.032 "is_configured": true, 00:16:57.032 "data_offset": 2048, 00:16:57.032 "data_size": 63488 00:16:57.032 }, 00:16:57.032 { 00:16:57.032 "name": "BaseBdev4", 00:16:57.032 "uuid": "872d229f-4df5-48b2-a840-55b5324f20c4", 00:16:57.032 "is_configured": true, 00:16:57.032 "data_offset": 2048, 00:16:57.032 "data_size": 63488 00:16:57.032 } 00:16:57.032 ] 00:16:57.032 }' 00:16:57.032 10:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.032 10:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.598 10:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.598 10:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.598 10:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.598 10:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:57.598 10:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.598 10:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:57.598 10:45:18 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:57.598 10:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.598 10:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.598 [2024-11-15 10:45:18.674921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:57.598 10:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.598 10:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:57.598 10:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:57.598 10:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:57.598 10:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:57.598 10:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:57.598 10:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:57.598 10:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.598 10:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.598 10:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.598 10:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.598 10:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.598 10:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:16:57.598 10:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.598 10:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.599 10:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.599 10:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.599 "name": "Existed_Raid", 00:16:57.599 "uuid": "a660e41c-9383-4e4d-ab96-4efe4279bacb", 00:16:57.599 "strip_size_kb": 64, 00:16:57.599 "state": "configuring", 00:16:57.599 "raid_level": "raid5f", 00:16:57.599 "superblock": true, 00:16:57.599 "num_base_bdevs": 4, 00:16:57.599 "num_base_bdevs_discovered": 3, 00:16:57.599 "num_base_bdevs_operational": 4, 00:16:57.599 "base_bdevs_list": [ 00:16:57.599 { 00:16:57.599 "name": null, 00:16:57.599 "uuid": "64a2681a-ba68-48d2-b1f6-12ead915c255", 00:16:57.599 "is_configured": false, 00:16:57.599 "data_offset": 0, 00:16:57.599 "data_size": 63488 00:16:57.599 }, 00:16:57.599 { 00:16:57.599 "name": "BaseBdev2", 00:16:57.599 "uuid": "8bde9c26-48b9-4fa6-b71d-17919901d79d", 00:16:57.599 "is_configured": true, 00:16:57.599 "data_offset": 2048, 00:16:57.599 "data_size": 63488 00:16:57.599 }, 00:16:57.599 { 00:16:57.599 "name": "BaseBdev3", 00:16:57.599 "uuid": "6bb8011d-6320-4eba-8500-930597caede5", 00:16:57.599 "is_configured": true, 00:16:57.599 "data_offset": 2048, 00:16:57.599 "data_size": 63488 00:16:57.599 }, 00:16:57.599 { 00:16:57.599 "name": "BaseBdev4", 00:16:57.599 "uuid": "872d229f-4df5-48b2-a840-55b5324f20c4", 00:16:57.599 "is_configured": true, 00:16:57.599 "data_offset": 2048, 00:16:57.599 "data_size": 63488 00:16:57.599 } 00:16:57.599 ] 00:16:57.599 }' 00:16:57.599 10:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.599 10:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:16:58.165 10:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:58.165 10:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.165 10:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.165 10:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.165 10:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.165 10:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:58.165 10:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:58.165 10:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.165 10:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.165 10:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.165 10:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.165 10:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 64a2681a-ba68-48d2-b1f6-12ead915c255 00:16:58.165 10:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.165 10:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.165 [2024-11-15 10:45:19.320534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:58.165 [2024-11-15 10:45:19.320997] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:58.166 [2024-11-15 
10:45:19.321137] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:58.166 NewBaseBdev 00:16:58.166 [2024-11-15 10:45:19.321515] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:58.166 10:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.166 10:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:58.166 10:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:58.166 10:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:58.166 10:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:58.166 10:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:58.166 10:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:58.166 10:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:58.166 10:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.166 10:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.429 [2024-11-15 10:45:19.328048] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:58.429 [2024-11-15 10:45:19.328190] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:58.429 [2024-11-15 10:45:19.328629] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:58.430 10:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.430 10:45:19 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:58.430 10:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.430 10:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.430 [ 00:16:58.430 { 00:16:58.430 "name": "NewBaseBdev", 00:16:58.430 "aliases": [ 00:16:58.430 "64a2681a-ba68-48d2-b1f6-12ead915c255" 00:16:58.430 ], 00:16:58.430 "product_name": "Malloc disk", 00:16:58.430 "block_size": 512, 00:16:58.430 "num_blocks": 65536, 00:16:58.430 "uuid": "64a2681a-ba68-48d2-b1f6-12ead915c255", 00:16:58.430 "assigned_rate_limits": { 00:16:58.430 "rw_ios_per_sec": 0, 00:16:58.430 "rw_mbytes_per_sec": 0, 00:16:58.430 "r_mbytes_per_sec": 0, 00:16:58.430 "w_mbytes_per_sec": 0 00:16:58.430 }, 00:16:58.430 "claimed": true, 00:16:58.430 "claim_type": "exclusive_write", 00:16:58.430 "zoned": false, 00:16:58.430 "supported_io_types": { 00:16:58.430 "read": true, 00:16:58.430 "write": true, 00:16:58.430 "unmap": true, 00:16:58.430 "flush": true, 00:16:58.430 "reset": true, 00:16:58.430 "nvme_admin": false, 00:16:58.430 "nvme_io": false, 00:16:58.430 "nvme_io_md": false, 00:16:58.430 "write_zeroes": true, 00:16:58.430 "zcopy": true, 00:16:58.430 "get_zone_info": false, 00:16:58.430 "zone_management": false, 00:16:58.430 "zone_append": false, 00:16:58.430 "compare": false, 00:16:58.430 "compare_and_write": false, 00:16:58.430 "abort": true, 00:16:58.430 "seek_hole": false, 00:16:58.430 "seek_data": false, 00:16:58.430 "copy": true, 00:16:58.430 "nvme_iov_md": false 00:16:58.430 }, 00:16:58.430 "memory_domains": [ 00:16:58.430 { 00:16:58.430 "dma_device_id": "system", 00:16:58.430 "dma_device_type": 1 00:16:58.430 }, 00:16:58.430 { 00:16:58.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.430 "dma_device_type": 2 00:16:58.430 } 00:16:58.430 ], 00:16:58.430 "driver_specific": {} 00:16:58.430 } 00:16:58.430 ] 00:16:58.430 10:45:19 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.430 10:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:58.430 10:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:58.430 10:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:58.430 10:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:58.430 10:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:58.430 10:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:58.430 10:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:58.430 10:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.430 10:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.430 10:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.430 10:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.430 10:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.430 10:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.430 10:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.430 10:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.430 10:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:58.430 10:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.430 "name": "Existed_Raid", 00:16:58.430 "uuid": "a660e41c-9383-4e4d-ab96-4efe4279bacb", 00:16:58.430 "strip_size_kb": 64, 00:16:58.430 "state": "online", 00:16:58.430 "raid_level": "raid5f", 00:16:58.430 "superblock": true, 00:16:58.430 "num_base_bdevs": 4, 00:16:58.430 "num_base_bdevs_discovered": 4, 00:16:58.430 "num_base_bdevs_operational": 4, 00:16:58.430 "base_bdevs_list": [ 00:16:58.430 { 00:16:58.430 "name": "NewBaseBdev", 00:16:58.430 "uuid": "64a2681a-ba68-48d2-b1f6-12ead915c255", 00:16:58.430 "is_configured": true, 00:16:58.430 "data_offset": 2048, 00:16:58.430 "data_size": 63488 00:16:58.430 }, 00:16:58.430 { 00:16:58.430 "name": "BaseBdev2", 00:16:58.430 "uuid": "8bde9c26-48b9-4fa6-b71d-17919901d79d", 00:16:58.430 "is_configured": true, 00:16:58.430 "data_offset": 2048, 00:16:58.430 "data_size": 63488 00:16:58.430 }, 00:16:58.430 { 00:16:58.430 "name": "BaseBdev3", 00:16:58.430 "uuid": "6bb8011d-6320-4eba-8500-930597caede5", 00:16:58.430 "is_configured": true, 00:16:58.430 "data_offset": 2048, 00:16:58.430 "data_size": 63488 00:16:58.430 }, 00:16:58.430 { 00:16:58.430 "name": "BaseBdev4", 00:16:58.430 "uuid": "872d229f-4df5-48b2-a840-55b5324f20c4", 00:16:58.430 "is_configured": true, 00:16:58.430 "data_offset": 2048, 00:16:58.430 "data_size": 63488 00:16:58.430 } 00:16:58.430 ] 00:16:58.430 }' 00:16:58.430 10:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.430 10:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.695 10:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:58.695 10:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:58.695 10:45:19 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:58.695 10:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:58.695 10:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:58.695 10:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:58.695 10:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:58.695 10:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:58.695 10:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.695 10:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.695 [2024-11-15 10:45:19.844448] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:58.953 10:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.953 10:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:58.953 "name": "Existed_Raid", 00:16:58.953 "aliases": [ 00:16:58.953 "a660e41c-9383-4e4d-ab96-4efe4279bacb" 00:16:58.953 ], 00:16:58.953 "product_name": "Raid Volume", 00:16:58.953 "block_size": 512, 00:16:58.953 "num_blocks": 190464, 00:16:58.953 "uuid": "a660e41c-9383-4e4d-ab96-4efe4279bacb", 00:16:58.953 "assigned_rate_limits": { 00:16:58.953 "rw_ios_per_sec": 0, 00:16:58.953 "rw_mbytes_per_sec": 0, 00:16:58.953 "r_mbytes_per_sec": 0, 00:16:58.953 "w_mbytes_per_sec": 0 00:16:58.953 }, 00:16:58.953 "claimed": false, 00:16:58.953 "zoned": false, 00:16:58.953 "supported_io_types": { 00:16:58.953 "read": true, 00:16:58.953 "write": true, 00:16:58.953 "unmap": false, 00:16:58.953 "flush": false, 00:16:58.953 "reset": true, 00:16:58.953 "nvme_admin": false, 00:16:58.953 "nvme_io": false, 
00:16:58.953 "nvme_io_md": false, 00:16:58.953 "write_zeroes": true, 00:16:58.953 "zcopy": false, 00:16:58.953 "get_zone_info": false, 00:16:58.953 "zone_management": false, 00:16:58.953 "zone_append": false, 00:16:58.953 "compare": false, 00:16:58.953 "compare_and_write": false, 00:16:58.953 "abort": false, 00:16:58.953 "seek_hole": false, 00:16:58.953 "seek_data": false, 00:16:58.953 "copy": false, 00:16:58.953 "nvme_iov_md": false 00:16:58.953 }, 00:16:58.953 "driver_specific": { 00:16:58.953 "raid": { 00:16:58.953 "uuid": "a660e41c-9383-4e4d-ab96-4efe4279bacb", 00:16:58.953 "strip_size_kb": 64, 00:16:58.953 "state": "online", 00:16:58.953 "raid_level": "raid5f", 00:16:58.953 "superblock": true, 00:16:58.953 "num_base_bdevs": 4, 00:16:58.953 "num_base_bdevs_discovered": 4, 00:16:58.953 "num_base_bdevs_operational": 4, 00:16:58.953 "base_bdevs_list": [ 00:16:58.953 { 00:16:58.953 "name": "NewBaseBdev", 00:16:58.953 "uuid": "64a2681a-ba68-48d2-b1f6-12ead915c255", 00:16:58.953 "is_configured": true, 00:16:58.953 "data_offset": 2048, 00:16:58.953 "data_size": 63488 00:16:58.953 }, 00:16:58.953 { 00:16:58.953 "name": "BaseBdev2", 00:16:58.954 "uuid": "8bde9c26-48b9-4fa6-b71d-17919901d79d", 00:16:58.954 "is_configured": true, 00:16:58.954 "data_offset": 2048, 00:16:58.954 "data_size": 63488 00:16:58.954 }, 00:16:58.954 { 00:16:58.954 "name": "BaseBdev3", 00:16:58.954 "uuid": "6bb8011d-6320-4eba-8500-930597caede5", 00:16:58.954 "is_configured": true, 00:16:58.954 "data_offset": 2048, 00:16:58.954 "data_size": 63488 00:16:58.954 }, 00:16:58.954 { 00:16:58.954 "name": "BaseBdev4", 00:16:58.954 "uuid": "872d229f-4df5-48b2-a840-55b5324f20c4", 00:16:58.954 "is_configured": true, 00:16:58.954 "data_offset": 2048, 00:16:58.954 "data_size": 63488 00:16:58.954 } 00:16:58.954 ] 00:16:58.954 } 00:16:58.954 } 00:16:58.954 }' 00:16:58.954 10:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:16:58.954 10:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:58.954 BaseBdev2 00:16:58.954 BaseBdev3 00:16:58.954 BaseBdev4' 00:16:58.954 10:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.954 10:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:58.954 10:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:58.954 10:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:58.954 10:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.954 10:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.954 10:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.954 10:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.954 10:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:58.954 10:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:58.954 10:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:58.954 10:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:58.954 10:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.954 10:45:20 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.954 10:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.954 10:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.954 10:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:58.954 10:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:58.954 10:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:58.954 10:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:58.954 10:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.954 10:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.954 10:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.212 10:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.212 10:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:59.212 10:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:59.212 10:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:59.212 10:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:59.212 10:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:59.212 10:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:59.212 10:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.212 10:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.212 10:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:59.212 10:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:59.213 10:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:59.213 10:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.213 10:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.213 [2024-11-15 10:45:20.216244] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:59.213 [2024-11-15 10:45:20.216396] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:59.213 [2024-11-15 10:45:20.216631] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:59.213 [2024-11-15 10:45:20.217029] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:59.213 [2024-11-15 10:45:20.217048] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:59.213 10:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.213 10:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83796 00:16:59.213 10:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83796 ']' 00:16:59.213 10:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 83796 00:16:59.213 10:45:20 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:59.213 10:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:59.213 10:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83796 00:16:59.213 10:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:59.213 10:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:59.213 10:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83796' 00:16:59.213 killing process with pid 83796 00:16:59.213 10:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83796 00:16:59.213 [2024-11-15 10:45:20.252729] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:59.213 10:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83796 00:16:59.471 [2024-11-15 10:45:20.607825] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:00.846 10:45:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:17:00.846 00:17:00.846 real 0m12.851s 00:17:00.846 user 0m21.427s 00:17:00.846 sys 0m1.695s 00:17:00.846 10:45:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:00.846 10:45:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.846 ************************************ 00:17:00.846 END TEST raid5f_state_function_test_sb 00:17:00.846 ************************************ 00:17:00.846 10:45:21 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:17:00.846 10:45:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:00.846 
10:45:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:00.846 10:45:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:00.846 ************************************ 00:17:00.846 START TEST raid5f_superblock_test 00:17:00.846 ************************************ 00:17:00.846 10:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:17:00.846 10:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:17:00.846 10:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:17:00.846 10:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:00.846 10:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:00.847 10:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:00.847 10:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:00.847 10:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:00.847 10:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:00.847 10:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:00.847 10:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:00.847 10:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:00.847 10:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:00.847 10:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:00.847 10:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:17:00.847 10:45:21 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:17:00.847 10:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:17:00.847 10:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84474 00:17:00.847 10:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:00.847 10:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84474 00:17:00.847 10:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84474 ']' 00:17:00.847 10:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.847 10:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:00.847 10:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.847 10:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:00.847 10:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.847 [2024-11-15 10:45:21.791481] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:17:00.847 [2024-11-15 10:45:21.791678] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84474 ] 00:17:00.847 [2024-11-15 10:45:21.980181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.105 [2024-11-15 10:45:22.131699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.364 [2024-11-15 10:45:22.341946] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:01.364 [2024-11-15 10:45:22.342017] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:01.624 10:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:01.624 10:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:17:01.624 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:01.624 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:01.624 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:01.624 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:01.624 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:01.624 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:01.624 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:01.624 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:01.624 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:17:01.624 10:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.624 10:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.883 malloc1 00:17:01.883 10:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.883 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:01.883 10:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.883 10:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.883 [2024-11-15 10:45:22.800785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:01.883 [2024-11-15 10:45:22.801002] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.883 [2024-11-15 10:45:22.801047] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:01.883 [2024-11-15 10:45:22.801064] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.883 [2024-11-15 10:45:22.803786] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.883 pt1 00:17:01.883 [2024-11-15 10:45:22.803951] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:01.883 10:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.883 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:01.883 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:01.883 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:01.883 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:17:01.883 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:01.883 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:01.883 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:01.883 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:01.883 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:17:01.883 10:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.883 10:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.883 malloc2 00:17:01.883 10:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.883 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:01.883 10:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.883 10:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.883 [2024-11-15 10:45:22.855329] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:01.883 [2024-11-15 10:45:22.855537] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.883 [2024-11-15 10:45:22.855723] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:01.883 [2024-11-15 10:45:22.855853] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.883 [2024-11-15 10:45:22.858749] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.883 [2024-11-15 10:45:22.858906] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:01.883 pt2 00:17:01.883 10:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.883 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:01.883 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:01.883 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:17:01.883 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:17:01.883 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:01.883 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:01.883 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:01.883 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:01.883 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:17:01.883 10:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.883 10:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.883 malloc3 00:17:01.883 10:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.883 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:01.884 10:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.884 10:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.884 [2024-11-15 10:45:22.919907] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:01.884 [2024-11-15 10:45:22.920094] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.884 [2024-11-15 10:45:22.920173] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:01.884 [2024-11-15 10:45:22.920283] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.884 [2024-11-15 10:45:22.922996] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.884 [2024-11-15 10:45:22.923150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:01.884 pt3 00:17:01.884 10:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.884 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:01.884 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:01.884 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:17:01.884 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:17:01.884 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:01.884 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:01.884 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:01.884 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:01.884 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:17:01.884 10:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.884 10:45:22 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.884 malloc4 00:17:01.884 10:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.884 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:01.884 10:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.884 10:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.884 [2024-11-15 10:45:22.971481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:01.884 [2024-11-15 10:45:22.971675] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.884 [2024-11-15 10:45:22.971829] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:01.884 [2024-11-15 10:45:22.971947] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.884 [2024-11-15 10:45:22.974684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.884 [2024-11-15 10:45:22.974729] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:01.884 pt4 00:17:01.884 10:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.884 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:01.884 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:01.884 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:17:01.884 10:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.884 10:45:22 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:01.884 [2024-11-15 10:45:22.979614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:01.884 [2024-11-15 10:45:22.982016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:01.884 [2024-11-15 10:45:22.982225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:01.884 [2024-11-15 10:45:22.982460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:01.884 [2024-11-15 10:45:22.982889] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:01.884 [2024-11-15 10:45:22.982920] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:01.884 [2024-11-15 10:45:22.983236] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:01.884 [2024-11-15 10:45:22.990024] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:01.884 [2024-11-15 10:45:22.990161] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:01.884 [2024-11-15 10:45:22.990607] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.884 10:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.884 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:01.884 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:01.884 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.884 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:01.884 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:01.884 
10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:01.884 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.884 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.884 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.884 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.884 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.884 10:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.884 10:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.884 10:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.884 10:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.142 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.142 "name": "raid_bdev1", 00:17:02.142 "uuid": "ccf05e9b-ba60-44f3-b48d-f74c07f6ee4f", 00:17:02.142 "strip_size_kb": 64, 00:17:02.142 "state": "online", 00:17:02.142 "raid_level": "raid5f", 00:17:02.142 "superblock": true, 00:17:02.142 "num_base_bdevs": 4, 00:17:02.142 "num_base_bdevs_discovered": 4, 00:17:02.142 "num_base_bdevs_operational": 4, 00:17:02.142 "base_bdevs_list": [ 00:17:02.142 { 00:17:02.142 "name": "pt1", 00:17:02.142 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:02.142 "is_configured": true, 00:17:02.142 "data_offset": 2048, 00:17:02.142 "data_size": 63488 00:17:02.142 }, 00:17:02.142 { 00:17:02.142 "name": "pt2", 00:17:02.142 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:02.142 "is_configured": true, 00:17:02.142 "data_offset": 2048, 00:17:02.142 
"data_size": 63488 00:17:02.142 }, 00:17:02.142 { 00:17:02.142 "name": "pt3", 00:17:02.142 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:02.142 "is_configured": true, 00:17:02.142 "data_offset": 2048, 00:17:02.142 "data_size": 63488 00:17:02.142 }, 00:17:02.142 { 00:17:02.142 "name": "pt4", 00:17:02.142 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:02.142 "is_configured": true, 00:17:02.142 "data_offset": 2048, 00:17:02.142 "data_size": 63488 00:17:02.142 } 00:17:02.142 ] 00:17:02.142 }' 00:17:02.142 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.142 10:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.400 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:02.400 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:02.400 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:02.400 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:02.400 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:02.400 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:02.400 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:02.400 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:02.400 10:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.400 10:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.400 [2024-11-15 10:45:23.494581] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:02.400 10:45:23 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.400 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:02.400 "name": "raid_bdev1", 00:17:02.400 "aliases": [ 00:17:02.400 "ccf05e9b-ba60-44f3-b48d-f74c07f6ee4f" 00:17:02.400 ], 00:17:02.400 "product_name": "Raid Volume", 00:17:02.400 "block_size": 512, 00:17:02.400 "num_blocks": 190464, 00:17:02.400 "uuid": "ccf05e9b-ba60-44f3-b48d-f74c07f6ee4f", 00:17:02.400 "assigned_rate_limits": { 00:17:02.400 "rw_ios_per_sec": 0, 00:17:02.400 "rw_mbytes_per_sec": 0, 00:17:02.400 "r_mbytes_per_sec": 0, 00:17:02.400 "w_mbytes_per_sec": 0 00:17:02.400 }, 00:17:02.400 "claimed": false, 00:17:02.400 "zoned": false, 00:17:02.400 "supported_io_types": { 00:17:02.400 "read": true, 00:17:02.400 "write": true, 00:17:02.400 "unmap": false, 00:17:02.400 "flush": false, 00:17:02.400 "reset": true, 00:17:02.400 "nvme_admin": false, 00:17:02.400 "nvme_io": false, 00:17:02.400 "nvme_io_md": false, 00:17:02.400 "write_zeroes": true, 00:17:02.400 "zcopy": false, 00:17:02.400 "get_zone_info": false, 00:17:02.400 "zone_management": false, 00:17:02.400 "zone_append": false, 00:17:02.400 "compare": false, 00:17:02.400 "compare_and_write": false, 00:17:02.400 "abort": false, 00:17:02.400 "seek_hole": false, 00:17:02.400 "seek_data": false, 00:17:02.400 "copy": false, 00:17:02.400 "nvme_iov_md": false 00:17:02.400 }, 00:17:02.400 "driver_specific": { 00:17:02.400 "raid": { 00:17:02.400 "uuid": "ccf05e9b-ba60-44f3-b48d-f74c07f6ee4f", 00:17:02.400 "strip_size_kb": 64, 00:17:02.400 "state": "online", 00:17:02.400 "raid_level": "raid5f", 00:17:02.400 "superblock": true, 00:17:02.400 "num_base_bdevs": 4, 00:17:02.401 "num_base_bdevs_discovered": 4, 00:17:02.401 "num_base_bdevs_operational": 4, 00:17:02.401 "base_bdevs_list": [ 00:17:02.401 { 00:17:02.401 "name": "pt1", 00:17:02.401 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:02.401 "is_configured": true, 00:17:02.401 "data_offset": 2048, 
00:17:02.401 "data_size": 63488 00:17:02.401 }, 00:17:02.401 { 00:17:02.401 "name": "pt2", 00:17:02.401 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:02.401 "is_configured": true, 00:17:02.401 "data_offset": 2048, 00:17:02.401 "data_size": 63488 00:17:02.401 }, 00:17:02.401 { 00:17:02.401 "name": "pt3", 00:17:02.401 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:02.401 "is_configured": true, 00:17:02.401 "data_offset": 2048, 00:17:02.401 "data_size": 63488 00:17:02.401 }, 00:17:02.401 { 00:17:02.401 "name": "pt4", 00:17:02.401 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:02.401 "is_configured": true, 00:17:02.401 "data_offset": 2048, 00:17:02.401 "data_size": 63488 00:17:02.401 } 00:17:02.401 ] 00:17:02.401 } 00:17:02.401 } 00:17:02.401 }' 00:17:02.401 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:02.659 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:02.659 pt2 00:17:02.659 pt3 00:17:02.659 pt4' 00:17:02.659 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.659 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:02.659 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:02.659 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:02.659 10:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.659 10:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.659 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.659 10:45:23 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.659 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:02.659 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:02.659 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:02.659 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:02.659 10:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.659 10:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.659 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.659 10:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.659 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:02.660 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:02.660 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:02.660 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:02.660 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.660 10:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.660 10:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.660 10:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.660 10:45:23 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:02.660 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:02.660 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:02.660 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:02.660 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.660 10:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.660 10:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.660 10:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.918 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:02.918 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:02.918 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:02.918 10:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.918 10:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.918 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:02.918 [2024-11-15 10:45:23.846572] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:02.918 10:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.918 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ccf05e9b-ba60-44f3-b48d-f74c07f6ee4f 00:17:02.918 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
ccf05e9b-ba60-44f3-b48d-f74c07f6ee4f ']' 00:17:02.918 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:02.918 10:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.918 10:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.918 [2024-11-15 10:45:23.898362] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:02.918 [2024-11-15 10:45:23.898505] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:02.918 [2024-11-15 10:45:23.898703] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:02.919 [2024-11-15 10:45:23.898933] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:02.919 [2024-11-15 10:45:23.899091] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:02.919 10:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.919 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.919 10:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.919 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:02.919 10:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.919 10:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.919 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:02.919 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:02.919 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:02.919 
10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:02.919 10:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.919 10:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.919 10:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.919 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:02.919 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:02.919 10:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.919 10:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.919 10:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.919 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:02.919 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:17:02.919 10:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.919 10:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.919 10:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.919 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:02.919 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:17:02.919 10:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.919 10:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.919 10:45:23 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.919 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:02.919 10:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.919 10:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.919 10:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:02.919 10:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.919 10:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:02.919 10:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:02.919 10:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:17:02.919 10:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:02.919 10:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:02.919 10:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:02.919 10:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:02.919 10:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:02.919 10:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:02.919 10:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:02.919 10:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.919 [2024-11-15 10:45:24.046444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:02.919 [2024-11-15 10:45:24.048988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:02.919 [2024-11-15 10:45:24.049170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:02.919 [2024-11-15 10:45:24.049241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:02.919 [2024-11-15 10:45:24.049316] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:02.919 [2024-11-15 10:45:24.049385] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:02.919 [2024-11-15 10:45:24.049418] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:02.919 [2024-11-15 10:45:24.049448] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:17:02.919 [2024-11-15 10:45:24.049470] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:02.919 [2024-11-15 10:45:24.049486] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:02.919 request: 00:17:02.919 { 00:17:02.919 "name": "raid_bdev1", 00:17:02.919 "raid_level": "raid5f", 00:17:02.919 "base_bdevs": [ 00:17:02.919 "malloc1", 00:17:02.919 "malloc2", 00:17:02.919 "malloc3", 00:17:02.919 "malloc4" 00:17:02.919 ], 00:17:02.919 "strip_size_kb": 64, 00:17:02.919 "superblock": false, 00:17:02.919 "method": "bdev_raid_create", 00:17:02.919 "req_id": 1 00:17:02.919 } 00:17:02.919 Got JSON-RPC error response 
00:17:02.919 response: 00:17:02.919 { 00:17:02.919 "code": -17, 00:17:02.919 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:02.919 } 00:17:02.919 10:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:02.919 10:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:17:02.919 10:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:02.919 10:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:02.919 10:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:02.919 10:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:02.919 10:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.919 10:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.919 10:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.919 10:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.178 10:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:03.178 10:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:03.178 10:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:03.178 10:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.178 10:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.178 [2024-11-15 10:45:24.114425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:03.178 [2024-11-15 10:45:24.114623] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:17:03.178 [2024-11-15 10:45:24.114752] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:03.178 [2024-11-15 10:45:24.114862] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.178 [2024-11-15 10:45:24.117777] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.178 [2024-11-15 10:45:24.117830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:03.178 [2024-11-15 10:45:24.117930] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:03.178 [2024-11-15 10:45:24.118008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:03.178 pt1 00:17:03.178 10:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.178 10:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:17:03.178 10:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:03.178 10:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:03.178 10:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:03.178 10:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:03.178 10:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:03.178 10:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.178 10:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.178 10:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.178 10:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:17:03.178 10:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.178 10:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.178 10:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.178 10:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.178 10:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.178 10:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.178 "name": "raid_bdev1", 00:17:03.178 "uuid": "ccf05e9b-ba60-44f3-b48d-f74c07f6ee4f", 00:17:03.178 "strip_size_kb": 64, 00:17:03.178 "state": "configuring", 00:17:03.178 "raid_level": "raid5f", 00:17:03.178 "superblock": true, 00:17:03.178 "num_base_bdevs": 4, 00:17:03.178 "num_base_bdevs_discovered": 1, 00:17:03.178 "num_base_bdevs_operational": 4, 00:17:03.178 "base_bdevs_list": [ 00:17:03.178 { 00:17:03.178 "name": "pt1", 00:17:03.178 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:03.178 "is_configured": true, 00:17:03.178 "data_offset": 2048, 00:17:03.178 "data_size": 63488 00:17:03.178 }, 00:17:03.178 { 00:17:03.178 "name": null, 00:17:03.178 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:03.178 "is_configured": false, 00:17:03.178 "data_offset": 2048, 00:17:03.178 "data_size": 63488 00:17:03.178 }, 00:17:03.178 { 00:17:03.178 "name": null, 00:17:03.178 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:03.178 "is_configured": false, 00:17:03.178 "data_offset": 2048, 00:17:03.178 "data_size": 63488 00:17:03.178 }, 00:17:03.178 { 00:17:03.178 "name": null, 00:17:03.178 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:03.178 "is_configured": false, 00:17:03.178 "data_offset": 2048, 00:17:03.178 "data_size": 63488 00:17:03.178 } 00:17:03.178 ] 00:17:03.178 }' 
00:17:03.178 10:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.178 10:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.746 10:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:17:03.746 10:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:03.746 10:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.746 10:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.746 [2024-11-15 10:45:24.642611] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:03.746 [2024-11-15 10:45:24.642834] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.746 [2024-11-15 10:45:24.643008] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:03.746 [2024-11-15 10:45:24.643040] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.746 [2024-11-15 10:45:24.643601] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.746 [2024-11-15 10:45:24.643644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:03.746 [2024-11-15 10:45:24.643751] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:03.746 [2024-11-15 10:45:24.643880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:03.746 pt2 00:17:03.746 10:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.746 10:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:17:03.746 10:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:03.746 10:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.746 [2024-11-15 10:45:24.650595] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:03.746 10:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.746 10:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:17:03.746 10:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:03.746 10:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:03.746 10:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:03.746 10:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:03.746 10:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:03.746 10:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.746 10:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.746 10:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.746 10:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.746 10:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.746 10:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.746 10:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.746 10:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.746 10:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:17:03.746 10:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.746 "name": "raid_bdev1", 00:17:03.746 "uuid": "ccf05e9b-ba60-44f3-b48d-f74c07f6ee4f", 00:17:03.746 "strip_size_kb": 64, 00:17:03.746 "state": "configuring", 00:17:03.746 "raid_level": "raid5f", 00:17:03.746 "superblock": true, 00:17:03.746 "num_base_bdevs": 4, 00:17:03.746 "num_base_bdevs_discovered": 1, 00:17:03.746 "num_base_bdevs_operational": 4, 00:17:03.746 "base_bdevs_list": [ 00:17:03.746 { 00:17:03.746 "name": "pt1", 00:17:03.746 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:03.746 "is_configured": true, 00:17:03.746 "data_offset": 2048, 00:17:03.746 "data_size": 63488 00:17:03.746 }, 00:17:03.746 { 00:17:03.746 "name": null, 00:17:03.746 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:03.746 "is_configured": false, 00:17:03.746 "data_offset": 0, 00:17:03.746 "data_size": 63488 00:17:03.746 }, 00:17:03.746 { 00:17:03.746 "name": null, 00:17:03.746 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:03.747 "is_configured": false, 00:17:03.747 "data_offset": 2048, 00:17:03.747 "data_size": 63488 00:17:03.747 }, 00:17:03.747 { 00:17:03.747 "name": null, 00:17:03.747 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:03.747 "is_configured": false, 00:17:03.747 "data_offset": 2048, 00:17:03.747 "data_size": 63488 00:17:03.747 } 00:17:03.747 ] 00:17:03.747 }' 00:17:03.747 10:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.747 10:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.005 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:04.005 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:04.005 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:17:04.005 10:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.005 10:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.005 [2024-11-15 10:45:25.154743] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:04.005 [2024-11-15 10:45:25.154937] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:04.005 [2024-11-15 10:45:25.155076] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:04.005 [2024-11-15 10:45:25.155198] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:04.005 [2024-11-15 10:45:25.155823] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:04.005 [2024-11-15 10:45:25.155970] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:04.005 [2024-11-15 10:45:25.156097] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:04.005 [2024-11-15 10:45:25.156130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:04.005 pt2 00:17:04.005 10:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.005 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:04.005 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:04.005 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:04.005 10:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.005 10:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.005 [2024-11-15 10:45:25.162698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:17:04.005 [2024-11-15 10:45:25.162756] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:04.005 [2024-11-15 10:45:25.162783] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:04.005 [2024-11-15 10:45:25.162797] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:04.005 [2024-11-15 10:45:25.163227] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:04.005 [2024-11-15 10:45:25.163259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:04.005 [2024-11-15 10:45:25.163339] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:04.005 [2024-11-15 10:45:25.163365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:04.264 pt3 00:17:04.264 10:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.264 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:04.264 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:04.264 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:04.264 10:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.264 10:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.264 [2024-11-15 10:45:25.170676] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:04.264 [2024-11-15 10:45:25.170855] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:04.264 [2024-11-15 10:45:25.170994] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:04.264 [2024-11-15 10:45:25.171117] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:04.264 [2024-11-15 10:45:25.171632] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:04.264 [2024-11-15 10:45:25.171782] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:04.264 [2024-11-15 10:45:25.171966] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:04.264 [2024-11-15 10:45:25.172102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:04.264 [2024-11-15 10:45:25.172402] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:04.264 [2024-11-15 10:45:25.172531] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:04.264 [2024-11-15 10:45:25.172947] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:04.264 [2024-11-15 10:45:25.179338] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:04.264 [2024-11-15 10:45:25.179368] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:04.264 [2024-11-15 10:45:25.179595] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:04.264 pt4 00:17:04.264 10:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.264 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:04.264 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:04.264 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:04.264 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:04.264 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:04.264 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:04.264 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:04.264 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:04.264 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.264 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.264 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.264 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.264 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.264 10:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.264 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.264 10:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.264 10:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.264 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.264 "name": "raid_bdev1", 00:17:04.264 "uuid": "ccf05e9b-ba60-44f3-b48d-f74c07f6ee4f", 00:17:04.264 "strip_size_kb": 64, 00:17:04.264 "state": "online", 00:17:04.264 "raid_level": "raid5f", 00:17:04.264 "superblock": true, 00:17:04.264 "num_base_bdevs": 4, 00:17:04.264 "num_base_bdevs_discovered": 4, 00:17:04.264 "num_base_bdevs_operational": 4, 00:17:04.264 "base_bdevs_list": [ 00:17:04.264 { 00:17:04.264 "name": "pt1", 00:17:04.264 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:04.264 "is_configured": true, 00:17:04.264 
"data_offset": 2048, 00:17:04.264 "data_size": 63488 00:17:04.264 }, 00:17:04.264 { 00:17:04.264 "name": "pt2", 00:17:04.264 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:04.264 "is_configured": true, 00:17:04.264 "data_offset": 2048, 00:17:04.264 "data_size": 63488 00:17:04.264 }, 00:17:04.264 { 00:17:04.264 "name": "pt3", 00:17:04.264 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:04.264 "is_configured": true, 00:17:04.264 "data_offset": 2048, 00:17:04.264 "data_size": 63488 00:17:04.264 }, 00:17:04.264 { 00:17:04.264 "name": "pt4", 00:17:04.264 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:04.264 "is_configured": true, 00:17:04.264 "data_offset": 2048, 00:17:04.264 "data_size": 63488 00:17:04.264 } 00:17:04.264 ] 00:17:04.264 }' 00:17:04.264 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.264 10:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.832 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:04.832 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:04.832 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:04.832 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:04.832 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:04.832 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:04.832 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:04.832 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:04.832 10:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.832 10:45:25 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.832 [2024-11-15 10:45:25.723311] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:04.832 10:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.832 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:04.832 "name": "raid_bdev1", 00:17:04.832 "aliases": [ 00:17:04.832 "ccf05e9b-ba60-44f3-b48d-f74c07f6ee4f" 00:17:04.832 ], 00:17:04.832 "product_name": "Raid Volume", 00:17:04.832 "block_size": 512, 00:17:04.832 "num_blocks": 190464, 00:17:04.832 "uuid": "ccf05e9b-ba60-44f3-b48d-f74c07f6ee4f", 00:17:04.832 "assigned_rate_limits": { 00:17:04.832 "rw_ios_per_sec": 0, 00:17:04.832 "rw_mbytes_per_sec": 0, 00:17:04.832 "r_mbytes_per_sec": 0, 00:17:04.832 "w_mbytes_per_sec": 0 00:17:04.832 }, 00:17:04.832 "claimed": false, 00:17:04.832 "zoned": false, 00:17:04.832 "supported_io_types": { 00:17:04.832 "read": true, 00:17:04.832 "write": true, 00:17:04.832 "unmap": false, 00:17:04.832 "flush": false, 00:17:04.832 "reset": true, 00:17:04.832 "nvme_admin": false, 00:17:04.832 "nvme_io": false, 00:17:04.832 "nvme_io_md": false, 00:17:04.832 "write_zeroes": true, 00:17:04.832 "zcopy": false, 00:17:04.832 "get_zone_info": false, 00:17:04.832 "zone_management": false, 00:17:04.832 "zone_append": false, 00:17:04.832 "compare": false, 00:17:04.832 "compare_and_write": false, 00:17:04.832 "abort": false, 00:17:04.832 "seek_hole": false, 00:17:04.832 "seek_data": false, 00:17:04.832 "copy": false, 00:17:04.832 "nvme_iov_md": false 00:17:04.832 }, 00:17:04.832 "driver_specific": { 00:17:04.832 "raid": { 00:17:04.832 "uuid": "ccf05e9b-ba60-44f3-b48d-f74c07f6ee4f", 00:17:04.832 "strip_size_kb": 64, 00:17:04.832 "state": "online", 00:17:04.832 "raid_level": "raid5f", 00:17:04.832 "superblock": true, 00:17:04.832 "num_base_bdevs": 4, 00:17:04.832 "num_base_bdevs_discovered": 4, 
00:17:04.832 "num_base_bdevs_operational": 4, 00:17:04.832 "base_bdevs_list": [ 00:17:04.832 { 00:17:04.832 "name": "pt1", 00:17:04.832 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:04.832 "is_configured": true, 00:17:04.832 "data_offset": 2048, 00:17:04.832 "data_size": 63488 00:17:04.832 }, 00:17:04.832 { 00:17:04.832 "name": "pt2", 00:17:04.832 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:04.832 "is_configured": true, 00:17:04.832 "data_offset": 2048, 00:17:04.832 "data_size": 63488 00:17:04.832 }, 00:17:04.832 { 00:17:04.832 "name": "pt3", 00:17:04.832 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:04.832 "is_configured": true, 00:17:04.832 "data_offset": 2048, 00:17:04.832 "data_size": 63488 00:17:04.832 }, 00:17:04.832 { 00:17:04.832 "name": "pt4", 00:17:04.832 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:04.832 "is_configured": true, 00:17:04.832 "data_offset": 2048, 00:17:04.832 "data_size": 63488 00:17:04.832 } 00:17:04.832 ] 00:17:04.832 } 00:17:04.832 } 00:17:04.832 }' 00:17:04.832 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:04.832 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:04.832 pt2 00:17:04.832 pt3 00:17:04.832 pt4' 00:17:04.832 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:04.832 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:04.832 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:04.832 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:04.832 10:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.832 10:45:25 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.832 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:04.832 10:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.832 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:04.832 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:04.832 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:04.832 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:04.832 10:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.832 10:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.832 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:04.832 10:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.832 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:04.832 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:04.832 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:04.832 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:04.832 10:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.832 10:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:04.832 
10:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.091 10:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.091 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:05.091 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:05.091 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:05.091 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:05.091 10:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.091 10:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.091 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:05.091 10:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.091 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:05.091 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:05.091 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:05.091 10:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.091 10:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.091 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:05.091 [2024-11-15 10:45:26.103327] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:05.091 10:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:05.091 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ccf05e9b-ba60-44f3-b48d-f74c07f6ee4f '!=' ccf05e9b-ba60-44f3-b48d-f74c07f6ee4f ']' 00:17:05.091 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:17:05.091 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:05.091 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:05.091 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:05.091 10:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.091 10:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.091 [2024-11-15 10:45:26.151184] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:05.091 10:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.091 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:05.092 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:05.092 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:05.092 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:05.092 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:05.092 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:05.092 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.092 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.092 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:17:05.092 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.092 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.092 10:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.092 10:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.092 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.092 10:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.092 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.092 "name": "raid_bdev1", 00:17:05.092 "uuid": "ccf05e9b-ba60-44f3-b48d-f74c07f6ee4f", 00:17:05.092 "strip_size_kb": 64, 00:17:05.092 "state": "online", 00:17:05.092 "raid_level": "raid5f", 00:17:05.092 "superblock": true, 00:17:05.092 "num_base_bdevs": 4, 00:17:05.092 "num_base_bdevs_discovered": 3, 00:17:05.092 "num_base_bdevs_operational": 3, 00:17:05.092 "base_bdevs_list": [ 00:17:05.092 { 00:17:05.092 "name": null, 00:17:05.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.092 "is_configured": false, 00:17:05.092 "data_offset": 0, 00:17:05.092 "data_size": 63488 00:17:05.092 }, 00:17:05.092 { 00:17:05.092 "name": "pt2", 00:17:05.092 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:05.092 "is_configured": true, 00:17:05.092 "data_offset": 2048, 00:17:05.092 "data_size": 63488 00:17:05.092 }, 00:17:05.092 { 00:17:05.092 "name": "pt3", 00:17:05.092 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:05.092 "is_configured": true, 00:17:05.092 "data_offset": 2048, 00:17:05.092 "data_size": 63488 00:17:05.092 }, 00:17:05.092 { 00:17:05.092 "name": "pt4", 00:17:05.092 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:05.092 "is_configured": true, 00:17:05.092 
"data_offset": 2048, 00:17:05.092 "data_size": 63488 00:17:05.092 } 00:17:05.092 ] 00:17:05.092 }' 00:17:05.092 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.092 10:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.659 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:05.659 10:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.660 [2024-11-15 10:45:26.647248] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:05.660 [2024-11-15 10:45:26.647401] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:05.660 [2024-11-15 10:45:26.647538] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:05.660 [2024-11-15 10:45:26.647642] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:05.660 [2024-11-15 10:45:26.647659] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.660 [2024-11-15 10:45:26.731246] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:05.660 [2024-11-15 10:45:26.731422] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.660 [2024-11-15 10:45:26.731467] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:17:05.660 [2024-11-15 10:45:26.731482] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.660 [2024-11-15 10:45:26.734310] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.660 pt2 00:17:05.660 [2024-11-15 10:45:26.734463] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:05.660 [2024-11-15 10:45:26.734601] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:05.660 [2024-11-15 10:45:26.734663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.660 "name": "raid_bdev1", 00:17:05.660 "uuid": "ccf05e9b-ba60-44f3-b48d-f74c07f6ee4f", 00:17:05.660 "strip_size_kb": 64, 00:17:05.660 "state": "configuring", 00:17:05.660 "raid_level": "raid5f", 00:17:05.660 "superblock": true, 00:17:05.660 
"num_base_bdevs": 4, 00:17:05.660 "num_base_bdevs_discovered": 1, 00:17:05.660 "num_base_bdevs_operational": 3, 00:17:05.660 "base_bdevs_list": [ 00:17:05.660 { 00:17:05.660 "name": null, 00:17:05.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.660 "is_configured": false, 00:17:05.660 "data_offset": 2048, 00:17:05.660 "data_size": 63488 00:17:05.660 }, 00:17:05.660 { 00:17:05.660 "name": "pt2", 00:17:05.660 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:05.660 "is_configured": true, 00:17:05.660 "data_offset": 2048, 00:17:05.660 "data_size": 63488 00:17:05.660 }, 00:17:05.660 { 00:17:05.660 "name": null, 00:17:05.660 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:05.660 "is_configured": false, 00:17:05.660 "data_offset": 2048, 00:17:05.660 "data_size": 63488 00:17:05.660 }, 00:17:05.660 { 00:17:05.660 "name": null, 00:17:05.660 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:05.660 "is_configured": false, 00:17:05.660 "data_offset": 2048, 00:17:05.660 "data_size": 63488 00:17:05.660 } 00:17:05.660 ] 00:17:05.660 }' 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.660 10:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.226 10:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:06.226 10:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:06.226 10:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:06.226 10:45:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.226 10:45:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.226 [2024-11-15 10:45:27.283419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:06.226 [2024-11-15 
10:45:27.283628] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.226 [2024-11-15 10:45:27.283765] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:06.226 [2024-11-15 10:45:27.283881] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.226 [2024-11-15 10:45:27.284474] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.226 [2024-11-15 10:45:27.284654] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:06.226 [2024-11-15 10:45:27.284781] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:06.226 [2024-11-15 10:45:27.284822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:06.226 pt3 00:17:06.226 10:45:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.226 10:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:06.226 10:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.227 10:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:06.227 10:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:06.227 10:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:06.227 10:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:06.227 10:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.227 10:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.227 10:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:06.227 10:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.227 10:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.227 10:45:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.227 10:45:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.227 10:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.227 10:45:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.227 10:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.227 "name": "raid_bdev1", 00:17:06.227 "uuid": "ccf05e9b-ba60-44f3-b48d-f74c07f6ee4f", 00:17:06.227 "strip_size_kb": 64, 00:17:06.227 "state": "configuring", 00:17:06.227 "raid_level": "raid5f", 00:17:06.227 "superblock": true, 00:17:06.227 "num_base_bdevs": 4, 00:17:06.227 "num_base_bdevs_discovered": 2, 00:17:06.227 "num_base_bdevs_operational": 3, 00:17:06.227 "base_bdevs_list": [ 00:17:06.227 { 00:17:06.227 "name": null, 00:17:06.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.227 "is_configured": false, 00:17:06.227 "data_offset": 2048, 00:17:06.227 "data_size": 63488 00:17:06.227 }, 00:17:06.227 { 00:17:06.227 "name": "pt2", 00:17:06.227 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:06.227 "is_configured": true, 00:17:06.227 "data_offset": 2048, 00:17:06.227 "data_size": 63488 00:17:06.227 }, 00:17:06.227 { 00:17:06.227 "name": "pt3", 00:17:06.227 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:06.227 "is_configured": true, 00:17:06.227 "data_offset": 2048, 00:17:06.227 "data_size": 63488 00:17:06.227 }, 00:17:06.227 { 00:17:06.227 "name": null, 00:17:06.227 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:06.227 "is_configured": false, 00:17:06.227 "data_offset": 2048, 
00:17:06.227 "data_size": 63488 00:17:06.227 } 00:17:06.227 ] 00:17:06.227 }' 00:17:06.227 10:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.227 10:45:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.793 10:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:06.793 10:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:06.793 10:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:17:06.793 10:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:06.793 10:45:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.793 10:45:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.793 [2024-11-15 10:45:27.795571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:06.793 [2024-11-15 10:45:27.795772] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.793 [2024-11-15 10:45:27.795851] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:17:06.793 [2024-11-15 10:45:27.795957] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.793 [2024-11-15 10:45:27.796592] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.793 [2024-11-15 10:45:27.796741] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:06.793 [2024-11-15 10:45:27.796945] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:06.793 [2024-11-15 10:45:27.797074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:06.793 [2024-11-15 10:45:27.797263] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:06.793 [2024-11-15 10:45:27.797280] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:06.793 [2024-11-15 10:45:27.797603] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:06.793 pt4 00:17:06.793 [2024-11-15 10:45:27.804009] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:06.793 [2024-11-15 10:45:27.804041] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:06.793 [2024-11-15 10:45:27.804391] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:06.793 10:45:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.793 10:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:06.793 10:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.793 10:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:06.794 10:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:06.794 10:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:06.794 10:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:06.794 10:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.794 10:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.794 10:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.794 10:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.794 
10:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.794 10:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.794 10:45:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.794 10:45:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.794 10:45:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.794 10:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.794 "name": "raid_bdev1", 00:17:06.794 "uuid": "ccf05e9b-ba60-44f3-b48d-f74c07f6ee4f", 00:17:06.794 "strip_size_kb": 64, 00:17:06.794 "state": "online", 00:17:06.794 "raid_level": "raid5f", 00:17:06.794 "superblock": true, 00:17:06.794 "num_base_bdevs": 4, 00:17:06.794 "num_base_bdevs_discovered": 3, 00:17:06.794 "num_base_bdevs_operational": 3, 00:17:06.794 "base_bdevs_list": [ 00:17:06.794 { 00:17:06.794 "name": null, 00:17:06.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.794 "is_configured": false, 00:17:06.794 "data_offset": 2048, 00:17:06.794 "data_size": 63488 00:17:06.794 }, 00:17:06.794 { 00:17:06.794 "name": "pt2", 00:17:06.794 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:06.794 "is_configured": true, 00:17:06.794 "data_offset": 2048, 00:17:06.794 "data_size": 63488 00:17:06.794 }, 00:17:06.794 { 00:17:06.794 "name": "pt3", 00:17:06.794 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:06.794 "is_configured": true, 00:17:06.794 "data_offset": 2048, 00:17:06.794 "data_size": 63488 00:17:06.794 }, 00:17:06.794 { 00:17:06.794 "name": "pt4", 00:17:06.794 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:06.794 "is_configured": true, 00:17:06.794 "data_offset": 2048, 00:17:06.794 "data_size": 63488 00:17:06.794 } 00:17:06.794 ] 00:17:06.794 }' 00:17:06.794 10:45:27 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.794 10:45:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.361 10:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:07.361 10:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.361 10:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.361 [2024-11-15 10:45:28.327812] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:07.361 [2024-11-15 10:45:28.327975] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:07.361 [2024-11-15 10:45:28.328097] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:07.361 [2024-11-15 10:45:28.328217] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:07.361 [2024-11-15 10:45:28.328243] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:07.361 10:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.361 10:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.361 10:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.361 10:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:07.361 10:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.361 10:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.361 10:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:07.361 10:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:17:07.361 10:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:17:07.361 10:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:17:07.361 10:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:17:07.361 10:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.361 10:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.361 10:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.361 10:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:07.361 10:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.361 10:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.361 [2024-11-15 10:45:28.395808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:07.361 [2024-11-15 10:45:28.396005] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.361 [2024-11-15 10:45:28.396047] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:17:07.361 [2024-11-15 10:45:28.396066] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.361 [2024-11-15 10:45:28.398927] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.361 [2024-11-15 10:45:28.399097] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:07.361 [2024-11-15 10:45:28.399224] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:07.361 [2024-11-15 10:45:28.399294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:07.361 
[2024-11-15 10:45:28.399455] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:07.361 [2024-11-15 10:45:28.399478] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:07.361 [2024-11-15 10:45:28.399520] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:07.361 [2024-11-15 10:45:28.399595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:07.361 [2024-11-15 10:45:28.399736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:07.361 pt1 00:17:07.361 10:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.361 10:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:17:07.361 10:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:07.361 10:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:07.361 10:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:07.361 10:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:07.361 10:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:07.361 10:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:07.361 10:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.361 10:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.361 10:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.361 10:45:28 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.361 10:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.361 10:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.361 10:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.361 10:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.361 10:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.361 10:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.361 "name": "raid_bdev1", 00:17:07.361 "uuid": "ccf05e9b-ba60-44f3-b48d-f74c07f6ee4f", 00:17:07.361 "strip_size_kb": 64, 00:17:07.361 "state": "configuring", 00:17:07.361 "raid_level": "raid5f", 00:17:07.361 "superblock": true, 00:17:07.361 "num_base_bdevs": 4, 00:17:07.361 "num_base_bdevs_discovered": 2, 00:17:07.361 "num_base_bdevs_operational": 3, 00:17:07.361 "base_bdevs_list": [ 00:17:07.361 { 00:17:07.361 "name": null, 00:17:07.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.361 "is_configured": false, 00:17:07.361 "data_offset": 2048, 00:17:07.361 "data_size": 63488 00:17:07.361 }, 00:17:07.361 { 00:17:07.361 "name": "pt2", 00:17:07.361 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:07.361 "is_configured": true, 00:17:07.361 "data_offset": 2048, 00:17:07.361 "data_size": 63488 00:17:07.361 }, 00:17:07.361 { 00:17:07.361 "name": "pt3", 00:17:07.361 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:07.361 "is_configured": true, 00:17:07.361 "data_offset": 2048, 00:17:07.361 "data_size": 63488 00:17:07.361 }, 00:17:07.361 { 00:17:07.361 "name": null, 00:17:07.361 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:07.361 "is_configured": false, 00:17:07.361 "data_offset": 2048, 00:17:07.361 "data_size": 63488 00:17:07.361 } 00:17:07.361 ] 
00:17:07.361 }' 00:17:07.361 10:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.361 10:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.928 10:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:17:07.928 10:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:07.928 10:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.928 10:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.928 10:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.928 10:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:17:07.928 10:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:07.928 10:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.928 10:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.928 [2024-11-15 10:45:28.976049] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:07.928 [2024-11-15 10:45:28.976250] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.928 [2024-11-15 10:45:28.976430] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:17:07.928 [2024-11-15 10:45:28.976456] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.928 [2024-11-15 10:45:28.977069] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.928 [2024-11-15 10:45:28.977096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:17:07.928 [2024-11-15 10:45:28.977196] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:07.928 [2024-11-15 10:45:28.977235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:07.928 [2024-11-15 10:45:28.977406] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:07.928 [2024-11-15 10:45:28.977421] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:07.928 [2024-11-15 10:45:28.977743] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:07.928 [2024-11-15 10:45:28.984250] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:07.928 [2024-11-15 10:45:28.984399] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:07.928 [2024-11-15 10:45:28.984879] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:07.928 pt4 00:17:07.928 10:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.928 10:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:07.928 10:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:07.928 10:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:07.928 10:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:07.928 10:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:07.928 10:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:07.928 10:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.928 10:45:28 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.928 10:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.928 10:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.928 10:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.928 10:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.928 10:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.928 10:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.928 10:45:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.928 10:45:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.928 "name": "raid_bdev1", 00:17:07.928 "uuid": "ccf05e9b-ba60-44f3-b48d-f74c07f6ee4f", 00:17:07.928 "strip_size_kb": 64, 00:17:07.928 "state": "online", 00:17:07.928 "raid_level": "raid5f", 00:17:07.928 "superblock": true, 00:17:07.928 "num_base_bdevs": 4, 00:17:07.928 "num_base_bdevs_discovered": 3, 00:17:07.928 "num_base_bdevs_operational": 3, 00:17:07.928 "base_bdevs_list": [ 00:17:07.928 { 00:17:07.928 "name": null, 00:17:07.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.928 "is_configured": false, 00:17:07.928 "data_offset": 2048, 00:17:07.928 "data_size": 63488 00:17:07.928 }, 00:17:07.928 { 00:17:07.928 "name": "pt2", 00:17:07.928 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:07.928 "is_configured": true, 00:17:07.928 "data_offset": 2048, 00:17:07.928 "data_size": 63488 00:17:07.928 }, 00:17:07.928 { 00:17:07.928 "name": "pt3", 00:17:07.928 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:07.928 "is_configured": true, 00:17:07.928 "data_offset": 2048, 00:17:07.928 "data_size": 63488 
00:17:07.928 }, 00:17:07.928 { 00:17:07.928 "name": "pt4", 00:17:07.928 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:07.928 "is_configured": true, 00:17:07.928 "data_offset": 2048, 00:17:07.928 "data_size": 63488 00:17:07.928 } 00:17:07.928 ] 00:17:07.928 }' 00:17:07.928 10:45:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.928 10:45:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.494 10:45:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:08.494 10:45:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:08.494 10:45:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.494 10:45:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.494 10:45:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.494 10:45:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:08.494 10:45:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:08.494 10:45:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:08.494 10:45:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.494 10:45:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.494 [2024-11-15 10:45:29.556640] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:08.494 10:45:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.494 10:45:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' ccf05e9b-ba60-44f3-b48d-f74c07f6ee4f '!=' ccf05e9b-ba60-44f3-b48d-f74c07f6ee4f ']' 00:17:08.494 10:45:29 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84474 00:17:08.494 10:45:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84474 ']' 00:17:08.494 10:45:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84474 00:17:08.494 10:45:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:17:08.494 10:45:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:08.494 10:45:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84474 00:17:08.494 killing process with pid 84474 00:17:08.494 10:45:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:08.494 10:45:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:08.494 10:45:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84474' 00:17:08.494 10:45:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84474 00:17:08.494 [2024-11-15 10:45:29.628672] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:08.494 [2024-11-15 10:45:29.628781] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:08.494 10:45:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84474 00:17:08.494 [2024-11-15 10:45:29.628876] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:08.494 [2024-11-15 10:45:29.628895] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:09.060 [2024-11-15 10:45:29.975176] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:09.992 ************************************ 00:17:09.992 END TEST raid5f_superblock_test 00:17:09.992 
************************************ 00:17:09.992 10:45:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:17:09.992 00:17:09.992 real 0m9.304s 00:17:09.992 user 0m15.308s 00:17:09.992 sys 0m1.307s 00:17:09.992 10:45:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:09.992 10:45:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.992 10:45:31 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:17:09.992 10:45:31 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:17:09.992 10:45:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:09.992 10:45:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:09.992 10:45:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:09.992 ************************************ 00:17:09.992 START TEST raid5f_rebuild_test 00:17:09.992 ************************************ 00:17:09.992 10:45:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:17:09.992 10:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:09.992 10:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:09.992 10:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:09.992 10:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:09.992 10:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:09.992 10:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:09.992 10:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:09.992 10:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:17:09.992 10:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:09.992 10:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:09.992 10:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:09.992 10:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:09.992 10:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:09.993 10:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:09.993 10:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:09.993 10:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:09.993 10:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:09.993 10:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:09.993 10:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:09.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:09.993 10:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:09.993 10:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:09.993 10:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:09.993 10:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:09.993 10:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:09.993 10:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:09.993 10:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:09.993 10:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:09.993 10:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:09.993 10:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:09.993 10:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:09.993 10:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:09.993 10:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84961 00:17:09.993 10:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84961 00:17:09.993 10:45:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 84961 ']' 00:17:09.993 10:45:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.993 10:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:09.993 10:45:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:17:09.993 10:45:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.993 10:45:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:09.993 10:45:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.250 [2024-11-15 10:45:31.156918] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:17:10.251 [2024-11-15 10:45:31.157336] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84961 ] 00:17:10.251 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:10.251 Zero copy mechanism will not be used. 00:17:10.251 [2024-11-15 10:45:31.349538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.508 [2024-11-15 10:45:31.509553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.765 [2024-11-15 10:45:31.729450] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:10.765 [2024-11-15 10:45:31.729731] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:11.023 10:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:11.023 10:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:17:11.023 10:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:11.023 10:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:11.023 10:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.023 10:45:32 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.023 BaseBdev1_malloc 00:17:11.023 10:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.023 10:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:11.023 10:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.023 10:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.280 [2024-11-15 10:45:32.183144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:11.280 [2024-11-15 10:45:32.183380] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.280 [2024-11-15 10:45:32.183462] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:11.280 [2024-11-15 10:45:32.183659] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.280 [2024-11-15 10:45:32.186706] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.280 BaseBdev1 00:17:11.280 [2024-11-15 10:45:32.186896] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:11.280 10:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.280 10:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:11.280 10:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:11.280 10:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.280 10:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.280 BaseBdev2_malloc 00:17:11.280 10:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:11.280 10:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:11.280 10:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.280 10:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.280 [2024-11-15 10:45:32.234192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:11.280 [2024-11-15 10:45:32.234409] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.280 [2024-11-15 10:45:32.234512] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:11.280 [2024-11-15 10:45:32.234722] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.280 [2024-11-15 10:45:32.237632] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.280 [2024-11-15 10:45:32.237683] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:11.280 BaseBdev2 00:17:11.280 10:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.280 10:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:11.280 10:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:11.280 10:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.280 10:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.280 BaseBdev3_malloc 00:17:11.280 10:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.281 10:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:11.281 10:45:32 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.281 10:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.281 [2024-11-15 10:45:32.302416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:11.281 [2024-11-15 10:45:32.302500] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.281 [2024-11-15 10:45:32.302535] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:11.281 [2024-11-15 10:45:32.302554] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.281 [2024-11-15 10:45:32.305338] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.281 [2024-11-15 10:45:32.305393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:11.281 BaseBdev3 00:17:11.281 10:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.281 10:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:11.281 10:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:11.281 10:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.281 10:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.281 BaseBdev4_malloc 00:17:11.281 10:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.281 10:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:11.281 10:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.281 10:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.281 [2024-11-15 10:45:32.351593] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:11.281 [2024-11-15 10:45:32.351796] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.281 [2024-11-15 10:45:32.351870] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:11.281 [2024-11-15 10:45:32.351983] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.281 [2024-11-15 10:45:32.354974] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.281 BaseBdev4 00:17:11.281 [2024-11-15 10:45:32.355170] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:11.281 10:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.281 10:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:11.281 10:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.281 10:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.281 spare_malloc 00:17:11.281 10:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.281 10:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:11.281 10:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.281 10:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.281 spare_delay 00:17:11.281 10:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.281 10:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:11.281 10:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:11.281 10:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.281 [2024-11-15 10:45:32.412514] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:11.281 [2024-11-15 10:45:32.412772] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.281 [2024-11-15 10:45:32.412849] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:11.281 [2024-11-15 10:45:32.412875] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.281 [2024-11-15 10:45:32.415796] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.281 [2024-11-15 10:45:32.415848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:11.281 spare 00:17:11.281 10:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.281 10:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:11.281 10:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.281 10:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.281 [2024-11-15 10:45:32.420818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:11.281 [2024-11-15 10:45:32.423570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:11.281 [2024-11-15 10:45:32.423788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:11.281 [2024-11-15 10:45:32.423919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:11.281 [2024-11-15 10:45:32.424144] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:11.281 
[2024-11-15 10:45:32.424241] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:11.281 [2024-11-15 10:45:32.424705] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:11.281 [2024-11-15 10:45:32.432112] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:11.281 [2024-11-15 10:45:32.432319] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:11.281 [2024-11-15 10:45:32.432807] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:11.281 10:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.281 10:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:11.281 10:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:11.281 10:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:11.281 10:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:11.281 10:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:11.281 10:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:11.281 10:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.281 10:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.281 10:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.281 10:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.537 10:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.537 10:45:32 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.537 10:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.537 10:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.537 10:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.537 10:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.537 "name": "raid_bdev1", 00:17:11.537 "uuid": "6936f49b-8086-4580-b55d-d7b8fd4a0698", 00:17:11.537 "strip_size_kb": 64, 00:17:11.537 "state": "online", 00:17:11.537 "raid_level": "raid5f", 00:17:11.537 "superblock": false, 00:17:11.537 "num_base_bdevs": 4, 00:17:11.537 "num_base_bdevs_discovered": 4, 00:17:11.537 "num_base_bdevs_operational": 4, 00:17:11.537 "base_bdevs_list": [ 00:17:11.537 { 00:17:11.537 "name": "BaseBdev1", 00:17:11.537 "uuid": "dbb8b667-4cb3-5148-8628-77c10367752a", 00:17:11.537 "is_configured": true, 00:17:11.537 "data_offset": 0, 00:17:11.537 "data_size": 65536 00:17:11.537 }, 00:17:11.537 { 00:17:11.537 "name": "BaseBdev2", 00:17:11.537 "uuid": "a22a4cc0-2447-5d5f-b558-99d929b60cdd", 00:17:11.537 "is_configured": true, 00:17:11.537 "data_offset": 0, 00:17:11.537 "data_size": 65536 00:17:11.537 }, 00:17:11.537 { 00:17:11.537 "name": "BaseBdev3", 00:17:11.537 "uuid": "801ffb19-95dc-5977-a1cc-3fb18e07c1be", 00:17:11.537 "is_configured": true, 00:17:11.537 "data_offset": 0, 00:17:11.537 "data_size": 65536 00:17:11.537 }, 00:17:11.537 { 00:17:11.537 "name": "BaseBdev4", 00:17:11.537 "uuid": "eb8de7b0-9960-5d2d-a3a8-d3d2555060de", 00:17:11.537 "is_configured": true, 00:17:11.537 "data_offset": 0, 00:17:11.537 "data_size": 65536 00:17:11.537 } 00:17:11.537 ] 00:17:11.537 }' 00:17:11.537 10:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.537 10:45:32 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:12.100 10:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:12.100 10:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:12.100 10:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.100 10:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.100 [2024-11-15 10:45:32.985017] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:12.100 10:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.100 10:45:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:17:12.100 10:45:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.100 10:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.100 10:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.100 10:45:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:12.100 10:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.100 10:45:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:12.100 10:45:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:12.100 10:45:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:12.100 10:45:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:12.100 10:45:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:12.100 10:45:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 
00:17:12.100 10:45:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:12.100 10:45:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:12.100 10:45:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:12.100 10:45:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:12.100 10:45:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:12.100 10:45:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:12.100 10:45:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:12.100 10:45:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:12.358 [2024-11-15 10:45:33.320908] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:12.358 /dev/nbd0 00:17:12.358 10:45:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:12.358 10:45:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:12.358 10:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:12.358 10:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:12.358 10:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:12.358 10:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:12.358 10:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:12.358 10:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:12.358 10:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:12.358 10:45:33 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:12.358 10:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:12.358 1+0 records in 00:17:12.358 1+0 records out 00:17:12.358 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255854 s, 16.0 MB/s 00:17:12.358 10:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:12.358 10:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:12.358 10:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:12.358 10:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:12.358 10:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:12.358 10:45:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:12.358 10:45:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:12.358 10:45:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:12.358 10:45:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:17:12.358 10:45:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:17:12.358 10:45:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:17:12.926 512+0 records in 00:17:12.926 512+0 records out 00:17:12.926 100663296 bytes (101 MB, 96 MiB) copied, 0.619059 s, 163 MB/s 00:17:12.926 10:45:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:12.926 10:45:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk.sock 00:17:12.926 10:45:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:12.926 10:45:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:12.926 10:45:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:12.926 10:45:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:12.926 10:45:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:13.185 10:45:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:13.185 [2024-11-15 10:45:34.280739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:13.185 10:45:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:13.185 10:45:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:13.185 10:45:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:13.185 10:45:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:13.185 10:45:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:13.185 10:45:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:13.185 10:45:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:13.185 10:45:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:13.185 10:45:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.185 10:45:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.185 [2024-11-15 10:45:34.291787] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:13.185 10:45:34 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.185 10:45:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:13.185 10:45:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.185 10:45:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.185 10:45:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:13.185 10:45:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:13.185 10:45:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:13.185 10:45:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.185 10:45:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.185 10:45:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.185 10:45:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.185 10:45:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.186 10:45:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.186 10:45:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.186 10:45:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.186 10:45:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.186 10:45:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.186 "name": "raid_bdev1", 00:17:13.186 "uuid": "6936f49b-8086-4580-b55d-d7b8fd4a0698", 00:17:13.186 "strip_size_kb": 64, 00:17:13.186 "state": "online", 00:17:13.186 "raid_level": "raid5f", 00:17:13.186 
"superblock": false, 00:17:13.186 "num_base_bdevs": 4, 00:17:13.186 "num_base_bdevs_discovered": 3, 00:17:13.186 "num_base_bdevs_operational": 3, 00:17:13.186 "base_bdevs_list": [ 00:17:13.186 { 00:17:13.186 "name": null, 00:17:13.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.186 "is_configured": false, 00:17:13.186 "data_offset": 0, 00:17:13.186 "data_size": 65536 00:17:13.186 }, 00:17:13.186 { 00:17:13.186 "name": "BaseBdev2", 00:17:13.186 "uuid": "a22a4cc0-2447-5d5f-b558-99d929b60cdd", 00:17:13.186 "is_configured": true, 00:17:13.186 "data_offset": 0, 00:17:13.186 "data_size": 65536 00:17:13.186 }, 00:17:13.186 { 00:17:13.186 "name": "BaseBdev3", 00:17:13.186 "uuid": "801ffb19-95dc-5977-a1cc-3fb18e07c1be", 00:17:13.186 "is_configured": true, 00:17:13.186 "data_offset": 0, 00:17:13.186 "data_size": 65536 00:17:13.186 }, 00:17:13.186 { 00:17:13.186 "name": "BaseBdev4", 00:17:13.186 "uuid": "eb8de7b0-9960-5d2d-a3a8-d3d2555060de", 00:17:13.186 "is_configured": true, 00:17:13.186 "data_offset": 0, 00:17:13.186 "data_size": 65536 00:17:13.186 } 00:17:13.186 ] 00:17:13.186 }' 00:17:13.186 10:45:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.186 10:45:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.756 10:45:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:13.756 10:45:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.756 10:45:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.756 [2024-11-15 10:45:34.827876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:13.756 [2024-11-15 10:45:34.842007] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:17:13.756 10:45:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.756 10:45:34 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:13.756 [2024-11-15 10:45:34.850705] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:14.693 10:45:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:14.693 10:45:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.693 10:45:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:14.693 10:45:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:14.693 10:45:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.693 10:45:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.693 10:45:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.693 10:45:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.952 10:45:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.952 10:45:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.952 10:45:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.952 "name": "raid_bdev1", 00:17:14.952 "uuid": "6936f49b-8086-4580-b55d-d7b8fd4a0698", 00:17:14.952 "strip_size_kb": 64, 00:17:14.952 "state": "online", 00:17:14.952 "raid_level": "raid5f", 00:17:14.952 "superblock": false, 00:17:14.952 "num_base_bdevs": 4, 00:17:14.952 "num_base_bdevs_discovered": 4, 00:17:14.952 "num_base_bdevs_operational": 4, 00:17:14.952 "process": { 00:17:14.952 "type": "rebuild", 00:17:14.952 "target": "spare", 00:17:14.952 "progress": { 00:17:14.952 "blocks": 17280, 00:17:14.952 "percent": 8 00:17:14.952 } 00:17:14.952 }, 00:17:14.952 
"base_bdevs_list": [ 00:17:14.952 { 00:17:14.952 "name": "spare", 00:17:14.952 "uuid": "7578b995-a906-5850-b679-e3fa78c42d10", 00:17:14.952 "is_configured": true, 00:17:14.952 "data_offset": 0, 00:17:14.952 "data_size": 65536 00:17:14.952 }, 00:17:14.952 { 00:17:14.952 "name": "BaseBdev2", 00:17:14.952 "uuid": "a22a4cc0-2447-5d5f-b558-99d929b60cdd", 00:17:14.952 "is_configured": true, 00:17:14.952 "data_offset": 0, 00:17:14.952 "data_size": 65536 00:17:14.952 }, 00:17:14.952 { 00:17:14.952 "name": "BaseBdev3", 00:17:14.952 "uuid": "801ffb19-95dc-5977-a1cc-3fb18e07c1be", 00:17:14.952 "is_configured": true, 00:17:14.952 "data_offset": 0, 00:17:14.952 "data_size": 65536 00:17:14.952 }, 00:17:14.952 { 00:17:14.952 "name": "BaseBdev4", 00:17:14.952 "uuid": "eb8de7b0-9960-5d2d-a3a8-d3d2555060de", 00:17:14.952 "is_configured": true, 00:17:14.952 "data_offset": 0, 00:17:14.952 "data_size": 65536 00:17:14.952 } 00:17:14.952 ] 00:17:14.952 }' 00:17:14.952 10:45:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.952 10:45:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:14.952 10:45:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:14.952 10:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:14.952 10:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:14.952 10:45:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.952 10:45:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.952 [2024-11-15 10:45:36.011992] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:14.952 [2024-11-15 10:45:36.062017] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:14.952 
[2024-11-15 10:45:36.062106] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:14.952 [2024-11-15 10:45:36.062134] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:14.952 [2024-11-15 10:45:36.062150] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:14.952 10:45:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.952 10:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:14.952 10:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:14.952 10:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:14.952 10:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:14.952 10:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:14.952 10:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:14.952 10:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.952 10:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.952 10:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.952 10:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.952 10:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.952 10:45:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.952 10:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.952 10:45:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:17:15.210 10:45:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.210 10:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.210 "name": "raid_bdev1", 00:17:15.210 "uuid": "6936f49b-8086-4580-b55d-d7b8fd4a0698", 00:17:15.210 "strip_size_kb": 64, 00:17:15.210 "state": "online", 00:17:15.210 "raid_level": "raid5f", 00:17:15.211 "superblock": false, 00:17:15.211 "num_base_bdevs": 4, 00:17:15.211 "num_base_bdevs_discovered": 3, 00:17:15.211 "num_base_bdevs_operational": 3, 00:17:15.211 "base_bdevs_list": [ 00:17:15.211 { 00:17:15.211 "name": null, 00:17:15.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.211 "is_configured": false, 00:17:15.211 "data_offset": 0, 00:17:15.211 "data_size": 65536 00:17:15.211 }, 00:17:15.211 { 00:17:15.211 "name": "BaseBdev2", 00:17:15.211 "uuid": "a22a4cc0-2447-5d5f-b558-99d929b60cdd", 00:17:15.211 "is_configured": true, 00:17:15.211 "data_offset": 0, 00:17:15.211 "data_size": 65536 00:17:15.211 }, 00:17:15.211 { 00:17:15.211 "name": "BaseBdev3", 00:17:15.211 "uuid": "801ffb19-95dc-5977-a1cc-3fb18e07c1be", 00:17:15.211 "is_configured": true, 00:17:15.211 "data_offset": 0, 00:17:15.211 "data_size": 65536 00:17:15.211 }, 00:17:15.211 { 00:17:15.211 "name": "BaseBdev4", 00:17:15.211 "uuid": "eb8de7b0-9960-5d2d-a3a8-d3d2555060de", 00:17:15.211 "is_configured": true, 00:17:15.211 "data_offset": 0, 00:17:15.211 "data_size": 65536 00:17:15.211 } 00:17:15.211 ] 00:17:15.211 }' 00:17:15.211 10:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.211 10:45:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.778 10:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:15.778 10:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:15.778 10:45:36 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:15.778 10:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:15.778 10:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:15.778 10:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.778 10:45:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.778 10:45:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.778 10:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.778 10:45:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.778 10:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:15.778 "name": "raid_bdev1", 00:17:15.778 "uuid": "6936f49b-8086-4580-b55d-d7b8fd4a0698", 00:17:15.778 "strip_size_kb": 64, 00:17:15.778 "state": "online", 00:17:15.778 "raid_level": "raid5f", 00:17:15.778 "superblock": false, 00:17:15.778 "num_base_bdevs": 4, 00:17:15.778 "num_base_bdevs_discovered": 3, 00:17:15.778 "num_base_bdevs_operational": 3, 00:17:15.778 "base_bdevs_list": [ 00:17:15.778 { 00:17:15.778 "name": null, 00:17:15.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.778 "is_configured": false, 00:17:15.778 "data_offset": 0, 00:17:15.778 "data_size": 65536 00:17:15.778 }, 00:17:15.778 { 00:17:15.778 "name": "BaseBdev2", 00:17:15.778 "uuid": "a22a4cc0-2447-5d5f-b558-99d929b60cdd", 00:17:15.778 "is_configured": true, 00:17:15.778 "data_offset": 0, 00:17:15.778 "data_size": 65536 00:17:15.778 }, 00:17:15.778 { 00:17:15.778 "name": "BaseBdev3", 00:17:15.778 "uuid": "801ffb19-95dc-5977-a1cc-3fb18e07c1be", 00:17:15.778 "is_configured": true, 00:17:15.778 "data_offset": 0, 00:17:15.778 "data_size": 65536 00:17:15.778 }, 
00:17:15.778 { 00:17:15.778 "name": "BaseBdev4", 00:17:15.778 "uuid": "eb8de7b0-9960-5d2d-a3a8-d3d2555060de", 00:17:15.778 "is_configured": true, 00:17:15.778 "data_offset": 0, 00:17:15.778 "data_size": 65536 00:17:15.778 } 00:17:15.778 ] 00:17:15.778 }' 00:17:15.778 10:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.778 10:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:15.778 10:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.778 10:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:15.778 10:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:15.778 10:45:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.778 10:45:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.778 [2024-11-15 10:45:36.829473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:15.778 [2024-11-15 10:45:36.842917] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:17:15.778 10:45:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.778 10:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:15.778 [2024-11-15 10:45:36.851771] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:16.713 10:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:16.713 10:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.713 10:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:16.713 10:45:37 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:16.713 10:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.713 10:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.713 10:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.713 10:45:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.713 10:45:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.973 10:45:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.973 10:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.973 "name": "raid_bdev1", 00:17:16.973 "uuid": "6936f49b-8086-4580-b55d-d7b8fd4a0698", 00:17:16.973 "strip_size_kb": 64, 00:17:16.973 "state": "online", 00:17:16.973 "raid_level": "raid5f", 00:17:16.973 "superblock": false, 00:17:16.973 "num_base_bdevs": 4, 00:17:16.973 "num_base_bdevs_discovered": 4, 00:17:16.973 "num_base_bdevs_operational": 4, 00:17:16.973 "process": { 00:17:16.973 "type": "rebuild", 00:17:16.973 "target": "spare", 00:17:16.973 "progress": { 00:17:16.973 "blocks": 17280, 00:17:16.973 "percent": 8 00:17:16.973 } 00:17:16.973 }, 00:17:16.973 "base_bdevs_list": [ 00:17:16.973 { 00:17:16.973 "name": "spare", 00:17:16.973 "uuid": "7578b995-a906-5850-b679-e3fa78c42d10", 00:17:16.973 "is_configured": true, 00:17:16.973 "data_offset": 0, 00:17:16.973 "data_size": 65536 00:17:16.973 }, 00:17:16.973 { 00:17:16.973 "name": "BaseBdev2", 00:17:16.973 "uuid": "a22a4cc0-2447-5d5f-b558-99d929b60cdd", 00:17:16.973 "is_configured": true, 00:17:16.973 "data_offset": 0, 00:17:16.973 "data_size": 65536 00:17:16.973 }, 00:17:16.973 { 00:17:16.973 "name": "BaseBdev3", 00:17:16.973 "uuid": "801ffb19-95dc-5977-a1cc-3fb18e07c1be", 00:17:16.973 
"is_configured": true, 00:17:16.973 "data_offset": 0, 00:17:16.973 "data_size": 65536 00:17:16.973 }, 00:17:16.973 { 00:17:16.973 "name": "BaseBdev4", 00:17:16.973 "uuid": "eb8de7b0-9960-5d2d-a3a8-d3d2555060de", 00:17:16.973 "is_configured": true, 00:17:16.973 "data_offset": 0, 00:17:16.973 "data_size": 65536 00:17:16.973 } 00:17:16.973 ] 00:17:16.973 }' 00:17:16.973 10:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.973 10:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:16.973 10:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.973 10:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:16.973 10:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:16.973 10:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:16.973 10:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:16.973 10:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=667 00:17:16.973 10:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:16.973 10:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:16.973 10:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.973 10:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:16.973 10:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:16.973 10:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.973 10:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:17:16.973 10:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.973 10:45:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.973 10:45:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.973 10:45:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.973 10:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.973 "name": "raid_bdev1", 00:17:16.973 "uuid": "6936f49b-8086-4580-b55d-d7b8fd4a0698", 00:17:16.973 "strip_size_kb": 64, 00:17:16.973 "state": "online", 00:17:16.973 "raid_level": "raid5f", 00:17:16.973 "superblock": false, 00:17:16.973 "num_base_bdevs": 4, 00:17:16.973 "num_base_bdevs_discovered": 4, 00:17:16.973 "num_base_bdevs_operational": 4, 00:17:16.973 "process": { 00:17:16.973 "type": "rebuild", 00:17:16.973 "target": "spare", 00:17:16.973 "progress": { 00:17:16.973 "blocks": 21120, 00:17:16.973 "percent": 10 00:17:16.973 } 00:17:16.973 }, 00:17:16.973 "base_bdevs_list": [ 00:17:16.973 { 00:17:16.973 "name": "spare", 00:17:16.973 "uuid": "7578b995-a906-5850-b679-e3fa78c42d10", 00:17:16.973 "is_configured": true, 00:17:16.973 "data_offset": 0, 00:17:16.973 "data_size": 65536 00:17:16.973 }, 00:17:16.973 { 00:17:16.973 "name": "BaseBdev2", 00:17:16.973 "uuid": "a22a4cc0-2447-5d5f-b558-99d929b60cdd", 00:17:16.973 "is_configured": true, 00:17:16.973 "data_offset": 0, 00:17:16.973 "data_size": 65536 00:17:16.973 }, 00:17:16.973 { 00:17:16.973 "name": "BaseBdev3", 00:17:16.973 "uuid": "801ffb19-95dc-5977-a1cc-3fb18e07c1be", 00:17:16.973 "is_configured": true, 00:17:16.973 "data_offset": 0, 00:17:16.973 "data_size": 65536 00:17:16.973 }, 00:17:16.973 { 00:17:16.973 "name": "BaseBdev4", 00:17:16.973 "uuid": "eb8de7b0-9960-5d2d-a3a8-d3d2555060de", 00:17:16.973 "is_configured": true, 00:17:16.973 "data_offset": 0, 
00:17:16.973 "data_size": 65536 00:17:16.973 } 00:17:16.973 ] 00:17:16.973 }' 00:17:16.973 10:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.973 10:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:16.973 10:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.232 10:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:17.232 10:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:18.167 10:45:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:18.167 10:45:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:18.167 10:45:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:18.167 10:45:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:18.167 10:45:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:18.167 10:45:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.167 10:45:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.167 10:45:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.167 10:45:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.167 10:45:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.167 10:45:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.167 10:45:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.167 "name": "raid_bdev1", 00:17:18.167 "uuid": 
"6936f49b-8086-4580-b55d-d7b8fd4a0698", 00:17:18.167 "strip_size_kb": 64, 00:17:18.167 "state": "online", 00:17:18.167 "raid_level": "raid5f", 00:17:18.167 "superblock": false, 00:17:18.167 "num_base_bdevs": 4, 00:17:18.167 "num_base_bdevs_discovered": 4, 00:17:18.167 "num_base_bdevs_operational": 4, 00:17:18.167 "process": { 00:17:18.167 "type": "rebuild", 00:17:18.167 "target": "spare", 00:17:18.167 "progress": { 00:17:18.167 "blocks": 44160, 00:17:18.167 "percent": 22 00:17:18.167 } 00:17:18.167 }, 00:17:18.167 "base_bdevs_list": [ 00:17:18.167 { 00:17:18.167 "name": "spare", 00:17:18.167 "uuid": "7578b995-a906-5850-b679-e3fa78c42d10", 00:17:18.167 "is_configured": true, 00:17:18.167 "data_offset": 0, 00:17:18.167 "data_size": 65536 00:17:18.167 }, 00:17:18.167 { 00:17:18.167 "name": "BaseBdev2", 00:17:18.167 "uuid": "a22a4cc0-2447-5d5f-b558-99d929b60cdd", 00:17:18.167 "is_configured": true, 00:17:18.167 "data_offset": 0, 00:17:18.167 "data_size": 65536 00:17:18.167 }, 00:17:18.167 { 00:17:18.167 "name": "BaseBdev3", 00:17:18.167 "uuid": "801ffb19-95dc-5977-a1cc-3fb18e07c1be", 00:17:18.167 "is_configured": true, 00:17:18.167 "data_offset": 0, 00:17:18.167 "data_size": 65536 00:17:18.167 }, 00:17:18.167 { 00:17:18.167 "name": "BaseBdev4", 00:17:18.167 "uuid": "eb8de7b0-9960-5d2d-a3a8-d3d2555060de", 00:17:18.167 "is_configured": true, 00:17:18.167 "data_offset": 0, 00:17:18.167 "data_size": 65536 00:17:18.167 } 00:17:18.167 ] 00:17:18.167 }' 00:17:18.167 10:45:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.167 10:45:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:18.167 10:45:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.424 10:45:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:18.424 10:45:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- 
# sleep 1 00:17:19.356 10:45:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:19.356 10:45:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:19.356 10:45:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.356 10:45:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:19.356 10:45:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:19.356 10:45:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.356 10:45:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.356 10:45:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.356 10:45:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.356 10:45:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.356 10:45:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.356 10:45:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.356 "name": "raid_bdev1", 00:17:19.356 "uuid": "6936f49b-8086-4580-b55d-d7b8fd4a0698", 00:17:19.356 "strip_size_kb": 64, 00:17:19.356 "state": "online", 00:17:19.356 "raid_level": "raid5f", 00:17:19.356 "superblock": false, 00:17:19.356 "num_base_bdevs": 4, 00:17:19.356 "num_base_bdevs_discovered": 4, 00:17:19.356 "num_base_bdevs_operational": 4, 00:17:19.356 "process": { 00:17:19.356 "type": "rebuild", 00:17:19.356 "target": "spare", 00:17:19.356 "progress": { 00:17:19.356 "blocks": 65280, 00:17:19.356 "percent": 33 00:17:19.356 } 00:17:19.356 }, 00:17:19.356 "base_bdevs_list": [ 00:17:19.356 { 00:17:19.356 "name": "spare", 00:17:19.356 "uuid": 
"7578b995-a906-5850-b679-e3fa78c42d10", 00:17:19.356 "is_configured": true, 00:17:19.356 "data_offset": 0, 00:17:19.356 "data_size": 65536 00:17:19.356 }, 00:17:19.356 { 00:17:19.356 "name": "BaseBdev2", 00:17:19.356 "uuid": "a22a4cc0-2447-5d5f-b558-99d929b60cdd", 00:17:19.356 "is_configured": true, 00:17:19.356 "data_offset": 0, 00:17:19.356 "data_size": 65536 00:17:19.356 }, 00:17:19.356 { 00:17:19.356 "name": "BaseBdev3", 00:17:19.356 "uuid": "801ffb19-95dc-5977-a1cc-3fb18e07c1be", 00:17:19.356 "is_configured": true, 00:17:19.356 "data_offset": 0, 00:17:19.356 "data_size": 65536 00:17:19.356 }, 00:17:19.356 { 00:17:19.356 "name": "BaseBdev4", 00:17:19.356 "uuid": "eb8de7b0-9960-5d2d-a3a8-d3d2555060de", 00:17:19.356 "is_configured": true, 00:17:19.356 "data_offset": 0, 00:17:19.356 "data_size": 65536 00:17:19.356 } 00:17:19.356 ] 00:17:19.356 }' 00:17:19.356 10:45:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.356 10:45:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:19.356 10:45:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.356 10:45:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:19.356 10:45:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:20.731 10:45:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:20.731 10:45:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:20.731 10:45:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.731 10:45:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:20.731 10:45:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:20.731 10:45:41 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.731 10:45:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.731 10:45:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.731 10:45:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.731 10:45:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.731 10:45:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.731 10:45:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.731 "name": "raid_bdev1", 00:17:20.731 "uuid": "6936f49b-8086-4580-b55d-d7b8fd4a0698", 00:17:20.731 "strip_size_kb": 64, 00:17:20.731 "state": "online", 00:17:20.731 "raid_level": "raid5f", 00:17:20.731 "superblock": false, 00:17:20.731 "num_base_bdevs": 4, 00:17:20.731 "num_base_bdevs_discovered": 4, 00:17:20.731 "num_base_bdevs_operational": 4, 00:17:20.731 "process": { 00:17:20.731 "type": "rebuild", 00:17:20.731 "target": "spare", 00:17:20.731 "progress": { 00:17:20.731 "blocks": 88320, 00:17:20.731 "percent": 44 00:17:20.731 } 00:17:20.731 }, 00:17:20.731 "base_bdevs_list": [ 00:17:20.731 { 00:17:20.731 "name": "spare", 00:17:20.731 "uuid": "7578b995-a906-5850-b679-e3fa78c42d10", 00:17:20.731 "is_configured": true, 00:17:20.731 "data_offset": 0, 00:17:20.731 "data_size": 65536 00:17:20.731 }, 00:17:20.731 { 00:17:20.731 "name": "BaseBdev2", 00:17:20.731 "uuid": "a22a4cc0-2447-5d5f-b558-99d929b60cdd", 00:17:20.731 "is_configured": true, 00:17:20.731 "data_offset": 0, 00:17:20.731 "data_size": 65536 00:17:20.731 }, 00:17:20.731 { 00:17:20.731 "name": "BaseBdev3", 00:17:20.731 "uuid": "801ffb19-95dc-5977-a1cc-3fb18e07c1be", 00:17:20.731 "is_configured": true, 00:17:20.731 "data_offset": 0, 00:17:20.731 "data_size": 65536 00:17:20.731 }, 
00:17:20.731 { 00:17:20.731 "name": "BaseBdev4", 00:17:20.731 "uuid": "eb8de7b0-9960-5d2d-a3a8-d3d2555060de", 00:17:20.731 "is_configured": true, 00:17:20.731 "data_offset": 0, 00:17:20.731 "data_size": 65536 00:17:20.731 } 00:17:20.731 ] 00:17:20.731 }' 00:17:20.731 10:45:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.731 10:45:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:20.731 10:45:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.731 10:45:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:20.731 10:45:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:21.666 10:45:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:21.666 10:45:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:21.666 10:45:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.666 10:45:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:21.666 10:45:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:21.666 10:45:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.666 10:45:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.666 10:45:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.666 10:45:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.667 10:45:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.667 10:45:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:21.667 10:45:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:21.667 "name": "raid_bdev1", 00:17:21.667 "uuid": "6936f49b-8086-4580-b55d-d7b8fd4a0698", 00:17:21.667 "strip_size_kb": 64, 00:17:21.667 "state": "online", 00:17:21.667 "raid_level": "raid5f", 00:17:21.667 "superblock": false, 00:17:21.667 "num_base_bdevs": 4, 00:17:21.667 "num_base_bdevs_discovered": 4, 00:17:21.667 "num_base_bdevs_operational": 4, 00:17:21.667 "process": { 00:17:21.667 "type": "rebuild", 00:17:21.667 "target": "spare", 00:17:21.667 "progress": { 00:17:21.667 "blocks": 109440, 00:17:21.667 "percent": 55 00:17:21.667 } 00:17:21.667 }, 00:17:21.667 "base_bdevs_list": [ 00:17:21.667 { 00:17:21.667 "name": "spare", 00:17:21.667 "uuid": "7578b995-a906-5850-b679-e3fa78c42d10", 00:17:21.667 "is_configured": true, 00:17:21.667 "data_offset": 0, 00:17:21.667 "data_size": 65536 00:17:21.667 }, 00:17:21.667 { 00:17:21.667 "name": "BaseBdev2", 00:17:21.667 "uuid": "a22a4cc0-2447-5d5f-b558-99d929b60cdd", 00:17:21.667 "is_configured": true, 00:17:21.667 "data_offset": 0, 00:17:21.667 "data_size": 65536 00:17:21.667 }, 00:17:21.667 { 00:17:21.667 "name": "BaseBdev3", 00:17:21.667 "uuid": "801ffb19-95dc-5977-a1cc-3fb18e07c1be", 00:17:21.667 "is_configured": true, 00:17:21.667 "data_offset": 0, 00:17:21.667 "data_size": 65536 00:17:21.667 }, 00:17:21.667 { 00:17:21.667 "name": "BaseBdev4", 00:17:21.667 "uuid": "eb8de7b0-9960-5d2d-a3a8-d3d2555060de", 00:17:21.667 "is_configured": true, 00:17:21.667 "data_offset": 0, 00:17:21.667 "data_size": 65536 00:17:21.667 } 00:17:21.667 ] 00:17:21.667 }' 00:17:21.667 10:45:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.667 10:45:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:21.667 10:45:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:21.926 10:45:42 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:21.926 10:45:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:22.862 10:45:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:22.862 10:45:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:22.862 10:45:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:22.862 10:45:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:22.862 10:45:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:22.862 10:45:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:22.862 10:45:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.862 10:45:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.862 10:45:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.862 10:45:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.862 10:45:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.862 10:45:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:22.862 "name": "raid_bdev1", 00:17:22.862 "uuid": "6936f49b-8086-4580-b55d-d7b8fd4a0698", 00:17:22.862 "strip_size_kb": 64, 00:17:22.862 "state": "online", 00:17:22.862 "raid_level": "raid5f", 00:17:22.862 "superblock": false, 00:17:22.862 "num_base_bdevs": 4, 00:17:22.862 "num_base_bdevs_discovered": 4, 00:17:22.862 "num_base_bdevs_operational": 4, 00:17:22.862 "process": { 00:17:22.862 "type": "rebuild", 00:17:22.862 "target": "spare", 00:17:22.862 "progress": { 00:17:22.862 "blocks": 132480, 
00:17:22.862 "percent": 67 00:17:22.862 } 00:17:22.862 }, 00:17:22.862 "base_bdevs_list": [ 00:17:22.862 { 00:17:22.862 "name": "spare", 00:17:22.862 "uuid": "7578b995-a906-5850-b679-e3fa78c42d10", 00:17:22.862 "is_configured": true, 00:17:22.862 "data_offset": 0, 00:17:22.862 "data_size": 65536 00:17:22.862 }, 00:17:22.862 { 00:17:22.862 "name": "BaseBdev2", 00:17:22.862 "uuid": "a22a4cc0-2447-5d5f-b558-99d929b60cdd", 00:17:22.862 "is_configured": true, 00:17:22.862 "data_offset": 0, 00:17:22.862 "data_size": 65536 00:17:22.862 }, 00:17:22.862 { 00:17:22.862 "name": "BaseBdev3", 00:17:22.862 "uuid": "801ffb19-95dc-5977-a1cc-3fb18e07c1be", 00:17:22.862 "is_configured": true, 00:17:22.862 "data_offset": 0, 00:17:22.862 "data_size": 65536 00:17:22.862 }, 00:17:22.862 { 00:17:22.862 "name": "BaseBdev4", 00:17:22.862 "uuid": "eb8de7b0-9960-5d2d-a3a8-d3d2555060de", 00:17:22.862 "is_configured": true, 00:17:22.862 "data_offset": 0, 00:17:22.862 "data_size": 65536 00:17:22.862 } 00:17:22.862 ] 00:17:22.862 }' 00:17:22.862 10:45:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.862 10:45:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:22.862 10:45:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.863 10:45:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:22.863 10:45:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:24.238 10:45:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:24.238 10:45:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:24.238 10:45:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:24.238 10:45:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:17:24.238 10:45:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:24.238 10:45:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:24.238 10:45:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.238 10:45:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.238 10:45:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.238 10:45:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.238 10:45:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.238 10:45:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:24.238 "name": "raid_bdev1", 00:17:24.238 "uuid": "6936f49b-8086-4580-b55d-d7b8fd4a0698", 00:17:24.238 "strip_size_kb": 64, 00:17:24.238 "state": "online", 00:17:24.238 "raid_level": "raid5f", 00:17:24.238 "superblock": false, 00:17:24.238 "num_base_bdevs": 4, 00:17:24.238 "num_base_bdevs_discovered": 4, 00:17:24.238 "num_base_bdevs_operational": 4, 00:17:24.238 "process": { 00:17:24.238 "type": "rebuild", 00:17:24.238 "target": "spare", 00:17:24.238 "progress": { 00:17:24.238 "blocks": 153600, 00:17:24.238 "percent": 78 00:17:24.238 } 00:17:24.238 }, 00:17:24.238 "base_bdevs_list": [ 00:17:24.238 { 00:17:24.238 "name": "spare", 00:17:24.238 "uuid": "7578b995-a906-5850-b679-e3fa78c42d10", 00:17:24.238 "is_configured": true, 00:17:24.238 "data_offset": 0, 00:17:24.238 "data_size": 65536 00:17:24.238 }, 00:17:24.238 { 00:17:24.238 "name": "BaseBdev2", 00:17:24.238 "uuid": "a22a4cc0-2447-5d5f-b558-99d929b60cdd", 00:17:24.238 "is_configured": true, 00:17:24.238 "data_offset": 0, 00:17:24.238 "data_size": 65536 00:17:24.238 }, 00:17:24.238 { 00:17:24.238 "name": "BaseBdev3", 00:17:24.238 "uuid": 
"801ffb19-95dc-5977-a1cc-3fb18e07c1be", 00:17:24.238 "is_configured": true, 00:17:24.238 "data_offset": 0, 00:17:24.238 "data_size": 65536 00:17:24.238 }, 00:17:24.238 { 00:17:24.238 "name": "BaseBdev4", 00:17:24.238 "uuid": "eb8de7b0-9960-5d2d-a3a8-d3d2555060de", 00:17:24.238 "is_configured": true, 00:17:24.238 "data_offset": 0, 00:17:24.238 "data_size": 65536 00:17:24.238 } 00:17:24.239 ] 00:17:24.239 }' 00:17:24.239 10:45:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:24.239 10:45:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:24.239 10:45:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:24.239 10:45:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:24.239 10:45:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:25.180 10:45:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:25.180 10:45:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:25.180 10:45:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:25.180 10:45:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:25.180 10:45:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:25.180 10:45:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:25.180 10:45:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.180 10:45:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.180 10:45:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.180 10:45:46 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.180 10:45:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.180 10:45:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:25.180 "name": "raid_bdev1", 00:17:25.180 "uuid": "6936f49b-8086-4580-b55d-d7b8fd4a0698", 00:17:25.180 "strip_size_kb": 64, 00:17:25.180 "state": "online", 00:17:25.180 "raid_level": "raid5f", 00:17:25.180 "superblock": false, 00:17:25.180 "num_base_bdevs": 4, 00:17:25.180 "num_base_bdevs_discovered": 4, 00:17:25.180 "num_base_bdevs_operational": 4, 00:17:25.180 "process": { 00:17:25.180 "type": "rebuild", 00:17:25.180 "target": "spare", 00:17:25.180 "progress": { 00:17:25.180 "blocks": 176640, 00:17:25.180 "percent": 89 00:17:25.180 } 00:17:25.180 }, 00:17:25.180 "base_bdevs_list": [ 00:17:25.180 { 00:17:25.180 "name": "spare", 00:17:25.180 "uuid": "7578b995-a906-5850-b679-e3fa78c42d10", 00:17:25.180 "is_configured": true, 00:17:25.180 "data_offset": 0, 00:17:25.180 "data_size": 65536 00:17:25.180 }, 00:17:25.180 { 00:17:25.180 "name": "BaseBdev2", 00:17:25.180 "uuid": "a22a4cc0-2447-5d5f-b558-99d929b60cdd", 00:17:25.180 "is_configured": true, 00:17:25.181 "data_offset": 0, 00:17:25.181 "data_size": 65536 00:17:25.181 }, 00:17:25.182 { 00:17:25.182 "name": "BaseBdev3", 00:17:25.182 "uuid": "801ffb19-95dc-5977-a1cc-3fb18e07c1be", 00:17:25.182 "is_configured": true, 00:17:25.182 "data_offset": 0, 00:17:25.182 "data_size": 65536 00:17:25.182 }, 00:17:25.182 { 00:17:25.182 "name": "BaseBdev4", 00:17:25.182 "uuid": "eb8de7b0-9960-5d2d-a3a8-d3d2555060de", 00:17:25.182 "is_configured": true, 00:17:25.182 "data_offset": 0, 00:17:25.182 "data_size": 65536 00:17:25.182 } 00:17:25.182 ] 00:17:25.182 }' 00:17:25.182 10:45:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:25.182 10:45:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:17:25.182 10:45:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:25.441 10:45:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:25.441 10:45:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:26.374 [2024-11-15 10:45:47.261568] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:26.374 [2024-11-15 10:45:47.261748] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:26.374 [2024-11-15 10:45:47.261848] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:26.374 10:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:26.374 10:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:26.374 10:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:26.374 10:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:26.374 10:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:26.374 10:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:26.374 10:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.374 10:45:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.374 10:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.374 10:45:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.374 10:45:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.374 10:45:47 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:26.374 "name": "raid_bdev1", 00:17:26.374 "uuid": "6936f49b-8086-4580-b55d-d7b8fd4a0698", 00:17:26.374 "strip_size_kb": 64, 00:17:26.374 "state": "online", 00:17:26.374 "raid_level": "raid5f", 00:17:26.374 "superblock": false, 00:17:26.374 "num_base_bdevs": 4, 00:17:26.374 "num_base_bdevs_discovered": 4, 00:17:26.374 "num_base_bdevs_operational": 4, 00:17:26.374 "base_bdevs_list": [ 00:17:26.374 { 00:17:26.374 "name": "spare", 00:17:26.374 "uuid": "7578b995-a906-5850-b679-e3fa78c42d10", 00:17:26.374 "is_configured": true, 00:17:26.374 "data_offset": 0, 00:17:26.374 "data_size": 65536 00:17:26.374 }, 00:17:26.374 { 00:17:26.374 "name": "BaseBdev2", 00:17:26.374 "uuid": "a22a4cc0-2447-5d5f-b558-99d929b60cdd", 00:17:26.374 "is_configured": true, 00:17:26.374 "data_offset": 0, 00:17:26.374 "data_size": 65536 00:17:26.374 }, 00:17:26.374 { 00:17:26.374 "name": "BaseBdev3", 00:17:26.374 "uuid": "801ffb19-95dc-5977-a1cc-3fb18e07c1be", 00:17:26.374 "is_configured": true, 00:17:26.374 "data_offset": 0, 00:17:26.374 "data_size": 65536 00:17:26.374 }, 00:17:26.374 { 00:17:26.374 "name": "BaseBdev4", 00:17:26.374 "uuid": "eb8de7b0-9960-5d2d-a3a8-d3d2555060de", 00:17:26.374 "is_configured": true, 00:17:26.374 "data_offset": 0, 00:17:26.374 "data_size": 65536 00:17:26.374 } 00:17:26.374 ] 00:17:26.374 }' 00:17:26.374 10:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:26.374 10:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:26.374 10:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:26.374 10:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:26.374 10:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:26.374 10:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:17:26.374 10:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:26.374 10:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:26.374 10:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:26.374 10:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:26.374 10:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.374 10:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.374 10:45:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.374 10:45:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.374 10:45:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.633 10:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:26.633 "name": "raid_bdev1", 00:17:26.633 "uuid": "6936f49b-8086-4580-b55d-d7b8fd4a0698", 00:17:26.633 "strip_size_kb": 64, 00:17:26.633 "state": "online", 00:17:26.633 "raid_level": "raid5f", 00:17:26.633 "superblock": false, 00:17:26.633 "num_base_bdevs": 4, 00:17:26.633 "num_base_bdevs_discovered": 4, 00:17:26.633 "num_base_bdevs_operational": 4, 00:17:26.633 "base_bdevs_list": [ 00:17:26.633 { 00:17:26.633 "name": "spare", 00:17:26.633 "uuid": "7578b995-a906-5850-b679-e3fa78c42d10", 00:17:26.633 "is_configured": true, 00:17:26.633 "data_offset": 0, 00:17:26.633 "data_size": 65536 00:17:26.633 }, 00:17:26.633 { 00:17:26.633 "name": "BaseBdev2", 00:17:26.633 "uuid": "a22a4cc0-2447-5d5f-b558-99d929b60cdd", 00:17:26.633 "is_configured": true, 00:17:26.633 "data_offset": 0, 00:17:26.633 "data_size": 65536 00:17:26.633 }, 00:17:26.633 { 00:17:26.633 "name": "BaseBdev3", 
00:17:26.633 "uuid": "801ffb19-95dc-5977-a1cc-3fb18e07c1be", 00:17:26.633 "is_configured": true, 00:17:26.633 "data_offset": 0, 00:17:26.633 "data_size": 65536 00:17:26.633 }, 00:17:26.633 { 00:17:26.633 "name": "BaseBdev4", 00:17:26.633 "uuid": "eb8de7b0-9960-5d2d-a3a8-d3d2555060de", 00:17:26.633 "is_configured": true, 00:17:26.633 "data_offset": 0, 00:17:26.633 "data_size": 65536 00:17:26.633 } 00:17:26.633 ] 00:17:26.633 }' 00:17:26.633 10:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:26.633 10:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:26.633 10:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:26.633 10:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:26.633 10:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:26.633 10:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:26.633 10:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.633 10:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:26.633 10:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:26.633 10:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:26.633 10:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.633 10:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.633 10:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.633 10:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.633 10:45:47 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.633 10:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.633 10:45:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.633 10:45:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.633 10:45:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.633 10:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.633 "name": "raid_bdev1", 00:17:26.633 "uuid": "6936f49b-8086-4580-b55d-d7b8fd4a0698", 00:17:26.633 "strip_size_kb": 64, 00:17:26.633 "state": "online", 00:17:26.633 "raid_level": "raid5f", 00:17:26.633 "superblock": false, 00:17:26.633 "num_base_bdevs": 4, 00:17:26.633 "num_base_bdevs_discovered": 4, 00:17:26.633 "num_base_bdevs_operational": 4, 00:17:26.633 "base_bdevs_list": [ 00:17:26.633 { 00:17:26.633 "name": "spare", 00:17:26.633 "uuid": "7578b995-a906-5850-b679-e3fa78c42d10", 00:17:26.633 "is_configured": true, 00:17:26.633 "data_offset": 0, 00:17:26.633 "data_size": 65536 00:17:26.633 }, 00:17:26.633 { 00:17:26.633 "name": "BaseBdev2", 00:17:26.633 "uuid": "a22a4cc0-2447-5d5f-b558-99d929b60cdd", 00:17:26.633 "is_configured": true, 00:17:26.633 "data_offset": 0, 00:17:26.633 "data_size": 65536 00:17:26.633 }, 00:17:26.633 { 00:17:26.634 "name": "BaseBdev3", 00:17:26.634 "uuid": "801ffb19-95dc-5977-a1cc-3fb18e07c1be", 00:17:26.634 "is_configured": true, 00:17:26.634 "data_offset": 0, 00:17:26.634 "data_size": 65536 00:17:26.634 }, 00:17:26.634 { 00:17:26.634 "name": "BaseBdev4", 00:17:26.634 "uuid": "eb8de7b0-9960-5d2d-a3a8-d3d2555060de", 00:17:26.634 "is_configured": true, 00:17:26.634 "data_offset": 0, 00:17:26.634 "data_size": 65536 00:17:26.634 } 00:17:26.634 ] 00:17:26.634 }' 00:17:26.634 10:45:47 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.634 10:45:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.199 10:45:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:27.199 10:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.200 10:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.200 [2024-11-15 10:45:48.200432] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:27.200 [2024-11-15 10:45:48.200481] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:27.200 [2024-11-15 10:45:48.200628] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:27.200 [2024-11-15 10:45:48.200784] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:27.200 [2024-11-15 10:45:48.200803] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:27.200 10:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.200 10:45:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:27.200 10:45:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.200 10:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.200 10:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.200 10:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.200 10:45:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:27.200 10:45:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:27.200 10:45:48 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:27.200 10:45:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:27.200 10:45:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:27.200 10:45:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:27.200 10:45:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:27.200 10:45:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:27.200 10:45:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:27.200 10:45:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:27.200 10:45:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:27.200 10:45:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:27.200 10:45:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:27.458 /dev/nbd0 00:17:27.458 10:45:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:27.458 10:45:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:27.458 10:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:27.458 10:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:27.458 10:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:27.458 10:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:27.458 10:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:27.458 10:45:48 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:27.458 10:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:27.458 10:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:27.458 10:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:27.458 1+0 records in 00:17:27.458 1+0 records out 00:17:27.458 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408125 s, 10.0 MB/s 00:17:27.458 10:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:27.717 10:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:27.717 10:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:27.717 10:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:27.717 10:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:27.717 10:45:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:27.717 10:45:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:27.717 10:45:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:27.975 /dev/nbd1 00:17:27.975 10:45:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:27.975 10:45:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:27.975 10:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:27.975 10:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 
00:17:27.975 10:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:27.975 10:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:27.975 10:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:27.975 10:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:27.975 10:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:27.975 10:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:27.975 10:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:27.975 1+0 records in 00:17:27.975 1+0 records out 00:17:27.975 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373818 s, 11.0 MB/s 00:17:27.975 10:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:27.975 10:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:27.975 10:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:27.975 10:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:27.975 10:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:27.975 10:45:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:27.975 10:45:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:27.975 10:45:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:27.975 10:45:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:27.975 
10:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:27.975 10:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:27.975 10:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:27.975 10:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:27.975 10:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:27.975 10:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:28.544 10:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:28.544 10:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:28.544 10:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:28.544 10:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:28.544 10:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:28.544 10:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:28.544 10:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:28.544 10:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:28.544 10:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:28.544 10:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:28.802 10:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:28.802 10:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:28.802 10:45:49 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:28.802 10:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:28.802 10:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:28.802 10:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:28.802 10:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:28.802 10:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:28.802 10:45:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:28.802 10:45:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84961 00:17:28.802 10:45:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 84961 ']' 00:17:28.802 10:45:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 84961 00:17:28.802 10:45:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:17:28.802 10:45:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:28.802 10:45:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84961 00:17:28.802 killing process with pid 84961 00:17:28.802 Received shutdown signal, test time was about 60.000000 seconds 00:17:28.802 00:17:28.802 Latency(us) 00:17:28.802 [2024-11-15T10:45:49.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:28.802 [2024-11-15T10:45:49.964Z] =================================================================================================================== 00:17:28.802 [2024-11-15T10:45:49.964Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:28.802 10:45:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:28.802 10:45:49 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:28.802 10:45:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84961' 00:17:28.802 10:45:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 84961 00:17:28.802 [2024-11-15 10:45:49.767748] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:28.802 10:45:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 84961 00:17:29.061 [2024-11-15 10:45:50.211700] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:30.438 10:45:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:30.438 00:17:30.438 real 0m20.204s 00:17:30.438 user 0m25.258s 00:17:30.438 sys 0m2.255s 00:17:30.438 ************************************ 00:17:30.438 END TEST raid5f_rebuild_test 00:17:30.438 ************************************ 00:17:30.438 10:45:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:30.438 10:45:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.438 10:45:51 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:17:30.438 10:45:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:30.438 10:45:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:30.438 10:45:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:30.438 ************************************ 00:17:30.438 START TEST raid5f_rebuild_test_sb 00:17:30.438 ************************************ 00:17:30.438 10:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:17:30.438 10:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:30.438 10:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # 
local num_base_bdevs=4 00:17:30.438 10:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:30.438 10:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:30.438 10:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:30.438 10:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:30.438 10:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:30.438 10:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:30.438 10:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:30.438 10:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:30.438 10:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:30.438 10:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:30.438 10:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:30.438 10:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:30.438 10:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:30.438 10:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:30.438 10:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:30.438 10:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:30.438 10:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:30.438 10:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:30.438 10:45:51 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:30.438 10:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:30.438 10:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:30.438 10:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:30.438 10:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:30.438 10:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:30.438 10:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:30.438 10:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:30.438 10:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:30.438 10:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:30.438 10:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:30.438 10:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:30.438 10:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85470 00:17:30.438 10:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85470 00:17:30.438 10:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:30.438 10:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85470 ']' 00:17:30.438 10:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.438 10:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:30.438 10:45:51 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.438 10:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:30.438 10:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.438 [2024-11-15 10:45:51.399543] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:17:30.438 [2024-11-15 10:45:51.399854] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85470 ] 00:17:30.438 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:30.438 Zero copy mechanism will not be used. 00:17:30.438 [2024-11-15 10:45:51.575864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.438 [2024-11-15 10:45:51.709664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.000 [2024-11-15 10:45:51.917671] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:31.000 [2024-11-15 10:45:51.917745] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:31.567 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:31.567 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:31.567 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:31.567 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:31.567 10:45:52 bdev_raid.raid5f_rebuild_test_sb --
common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.567 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.567 BaseBdev1_malloc 00:17:31.567 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.567 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:31.567 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.567 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.567 [2024-11-15 10:45:52.558826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:31.568 [2024-11-15 10:45:52.558916] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.568 [2024-11-15 10:45:52.558967] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:31.568 [2024-11-15 10:45:52.558988] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.568 [2024-11-15 10:45:52.561936] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.568 [2024-11-15 10:45:52.562016] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:31.568 BaseBdev1 00:17:31.568 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.568 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:31.568 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:31.568 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.568 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.568 BaseBdev2_malloc 
00:17:31.568 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.568 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:31.568 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.568 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.568 [2024-11-15 10:45:52.615340] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:31.568 [2024-11-15 10:45:52.615415] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.568 [2024-11-15 10:45:52.615452] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:31.568 [2024-11-15 10:45:52.615472] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.568 [2024-11-15 10:45:52.618209] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.568 [2024-11-15 10:45:52.618258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:31.568 BaseBdev2 00:17:31.568 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.568 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:31.568 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:31.568 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.568 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.568 BaseBdev3_malloc 00:17:31.568 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.568 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:31.568 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.568 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.568 [2024-11-15 10:45:52.671102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:31.568 [2024-11-15 10:45:52.671327] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.568 [2024-11-15 10:45:52.671372] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:31.568 [2024-11-15 10:45:52.671393] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.568 [2024-11-15 10:45:52.674180] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.568 [2024-11-15 10:45:52.674234] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:31.568 BaseBdev3 00:17:31.568 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.568 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:31.568 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:31.568 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.568 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.568 BaseBdev4_malloc 00:17:31.568 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.568 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:31.568 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:17:31.568 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.827 [2024-11-15 10:45:52.727371] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:31.827 [2024-11-15 10:45:52.727454] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.827 [2024-11-15 10:45:52.727484] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:31.827 [2024-11-15 10:45:52.727518] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.827 [2024-11-15 10:45:52.730362] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.827 [2024-11-15 10:45:52.730429] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:31.827 BaseBdev4 00:17:31.827 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.827 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:31.827 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.827 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.827 spare_malloc 00:17:31.827 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.827 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:31.827 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.827 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.827 spare_delay 00:17:31.827 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.827 10:45:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:31.827 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.827 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.827 [2024-11-15 10:45:52.791247] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:31.827 [2024-11-15 10:45:52.791331] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.827 [2024-11-15 10:45:52.791359] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:31.827 [2024-11-15 10:45:52.791376] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.827 [2024-11-15 10:45:52.794247] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.827 [2024-11-15 10:45:52.794315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:31.827 spare 00:17:31.827 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.827 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:31.827 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.827 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.827 [2024-11-15 10:45:52.803330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:31.827 [2024-11-15 10:45:52.805803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:31.827 [2024-11-15 10:45:52.805918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:31.827 [2024-11-15 10:45:52.805996] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:31.827 [2024-11-15 10:45:52.806233] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:31.827 [2024-11-15 10:45:52.806257] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:31.827 [2024-11-15 10:45:52.806597] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:31.827 [2024-11-15 10:45:52.813316] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:31.827 [2024-11-15 10:45:52.813341] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:31.827 [2024-11-15 10:45:52.813618] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:31.827 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.827 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:31.827 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:31.827 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:31.827 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:31.827 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:31.827 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:31.827 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.827 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.827 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:31.827 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.827 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.827 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.827 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.827 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.828 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.828 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.828 "name": "raid_bdev1", 00:17:31.828 "uuid": "bc7005d5-37ef-45bb-b416-abf510390be1", 00:17:31.828 "strip_size_kb": 64, 00:17:31.828 "state": "online", 00:17:31.828 "raid_level": "raid5f", 00:17:31.828 "superblock": true, 00:17:31.828 "num_base_bdevs": 4, 00:17:31.828 "num_base_bdevs_discovered": 4, 00:17:31.828 "num_base_bdevs_operational": 4, 00:17:31.828 "base_bdevs_list": [ 00:17:31.828 { 00:17:31.828 "name": "BaseBdev1", 00:17:31.828 "uuid": "f1372c23-b322-5a6f-8ff5-0bdc2fe15090", 00:17:31.828 "is_configured": true, 00:17:31.828 "data_offset": 2048, 00:17:31.828 "data_size": 63488 00:17:31.828 }, 00:17:31.828 { 00:17:31.828 "name": "BaseBdev2", 00:17:31.828 "uuid": "34786f23-0246-5bbc-9f5e-ce7883d8cee4", 00:17:31.828 "is_configured": true, 00:17:31.828 "data_offset": 2048, 00:17:31.828 "data_size": 63488 00:17:31.828 }, 00:17:31.828 { 00:17:31.828 "name": "BaseBdev3", 00:17:31.828 "uuid": "594c42bd-2013-5958-9071-515f21a29886", 00:17:31.828 "is_configured": true, 00:17:31.828 "data_offset": 2048, 00:17:31.828 "data_size": 63488 00:17:31.828 }, 00:17:31.828 { 00:17:31.828 "name": "BaseBdev4", 00:17:31.828 "uuid": "8fefe0c0-b21c-54a2-9916-a2cc1ebb58ce", 00:17:31.828 "is_configured": true, 00:17:31.828 
"data_offset": 2048, 00:17:31.828 "data_size": 63488 00:17:31.828 } 00:17:31.828 ] 00:17:31.828 }' 00:17:31.828 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.828 10:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.395 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:32.395 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:32.395 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.395 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.395 [2024-11-15 10:45:53.349449] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:32.395 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.395 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:17:32.395 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.395 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:32.395 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.395 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.395 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.395 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:32.395 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:32.395 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:32.395 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:32.395 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:32.395 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:32.395 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:32.395 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:32.395 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:32.395 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:32.395 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:32.395 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:32.395 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:32.395 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:32.653 [2024-11-15 10:45:53.729340] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:32.653 /dev/nbd0 00:17:32.653 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:32.653 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:32.653 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:32.653 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:32.653 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:32.653 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:32.653 
10:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:32.653 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:32.653 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:32.653 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:32.653 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:32.653 1+0 records in 00:17:32.653 1+0 records out 00:17:32.653 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313325 s, 13.1 MB/s 00:17:32.653 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:32.653 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:32.653 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:32.653 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:32.653 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:32.653 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:32.653 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:32.653 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:32.653 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:17:32.653 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:17:32.653 10:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 
oflag=direct 00:17:33.589 496+0 records in 00:17:33.589 496+0 records out 00:17:33.589 97517568 bytes (98 MB, 93 MiB) copied, 0.583158 s, 167 MB/s 00:17:33.589 10:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:33.589 10:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:33.589 10:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:33.589 10:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:33.589 10:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:33.589 10:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:33.589 10:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:33.589 10:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:33.589 10:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:33.589 10:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:33.589 10:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:33.589 10:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:33.589 10:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:33.589 [2024-11-15 10:45:54.650940] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:33.589 10:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:33.589 10:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:33.589 10:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # 
rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:33.589 10:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.589 10:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.589 [2024-11-15 10:45:54.658469] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:33.589 10:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.589 10:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:33.589 10:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:33.589 10:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:33.589 10:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:33.589 10:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:33.589 10:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:33.589 10:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.589 10:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.589 10:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.589 10:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.589 10:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.589 10:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.589 10:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.589 10:45:54 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:17:33.589 10:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.589 10:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.589 "name": "raid_bdev1", 00:17:33.589 "uuid": "bc7005d5-37ef-45bb-b416-abf510390be1", 00:17:33.589 "strip_size_kb": 64, 00:17:33.589 "state": "online", 00:17:33.589 "raid_level": "raid5f", 00:17:33.589 "superblock": true, 00:17:33.589 "num_base_bdevs": 4, 00:17:33.589 "num_base_bdevs_discovered": 3, 00:17:33.589 "num_base_bdevs_operational": 3, 00:17:33.589 "base_bdevs_list": [ 00:17:33.589 { 00:17:33.589 "name": null, 00:17:33.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.589 "is_configured": false, 00:17:33.589 "data_offset": 0, 00:17:33.589 "data_size": 63488 00:17:33.589 }, 00:17:33.589 { 00:17:33.590 "name": "BaseBdev2", 00:17:33.590 "uuid": "34786f23-0246-5bbc-9f5e-ce7883d8cee4", 00:17:33.590 "is_configured": true, 00:17:33.590 "data_offset": 2048, 00:17:33.590 "data_size": 63488 00:17:33.590 }, 00:17:33.590 { 00:17:33.590 "name": "BaseBdev3", 00:17:33.590 "uuid": "594c42bd-2013-5958-9071-515f21a29886", 00:17:33.590 "is_configured": true, 00:17:33.590 "data_offset": 2048, 00:17:33.590 "data_size": 63488 00:17:33.590 }, 00:17:33.590 { 00:17:33.590 "name": "BaseBdev4", 00:17:33.590 "uuid": "8fefe0c0-b21c-54a2-9916-a2cc1ebb58ce", 00:17:33.590 "is_configured": true, 00:17:33.590 "data_offset": 2048, 00:17:33.590 "data_size": 63488 00:17:33.590 } 00:17:33.590 ] 00:17:33.590 }' 00:17:33.590 10:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.590 10:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.156 10:45:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:34.156 10:45:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:34.156 10:45:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.156 [2024-11-15 10:45:55.178636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:34.156 [2024-11-15 10:45:55.193096] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:17:34.156 10:45:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.156 10:45:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:34.156 [2024-11-15 10:45:55.202195] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:35.093 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:35.093 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:35.093 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:35.093 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:35.093 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:35.093 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.093 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.093 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.093 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.093 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.351 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:35.351 "name": "raid_bdev1", 00:17:35.351 "uuid": 
"bc7005d5-37ef-45bb-b416-abf510390be1", 00:17:35.351 "strip_size_kb": 64, 00:17:35.351 "state": "online", 00:17:35.351 "raid_level": "raid5f", 00:17:35.351 "superblock": true, 00:17:35.351 "num_base_bdevs": 4, 00:17:35.351 "num_base_bdevs_discovered": 4, 00:17:35.351 "num_base_bdevs_operational": 4, 00:17:35.351 "process": { 00:17:35.351 "type": "rebuild", 00:17:35.351 "target": "spare", 00:17:35.351 "progress": { 00:17:35.351 "blocks": 17280, 00:17:35.351 "percent": 9 00:17:35.351 } 00:17:35.351 }, 00:17:35.351 "base_bdevs_list": [ 00:17:35.351 { 00:17:35.351 "name": "spare", 00:17:35.351 "uuid": "4090a330-65a6-5519-8c82-f12520887ced", 00:17:35.351 "is_configured": true, 00:17:35.351 "data_offset": 2048, 00:17:35.351 "data_size": 63488 00:17:35.351 }, 00:17:35.352 { 00:17:35.352 "name": "BaseBdev2", 00:17:35.352 "uuid": "34786f23-0246-5bbc-9f5e-ce7883d8cee4", 00:17:35.352 "is_configured": true, 00:17:35.352 "data_offset": 2048, 00:17:35.352 "data_size": 63488 00:17:35.352 }, 00:17:35.352 { 00:17:35.352 "name": "BaseBdev3", 00:17:35.352 "uuid": "594c42bd-2013-5958-9071-515f21a29886", 00:17:35.352 "is_configured": true, 00:17:35.352 "data_offset": 2048, 00:17:35.352 "data_size": 63488 00:17:35.352 }, 00:17:35.352 { 00:17:35.352 "name": "BaseBdev4", 00:17:35.352 "uuid": "8fefe0c0-b21c-54a2-9916-a2cc1ebb58ce", 00:17:35.352 "is_configured": true, 00:17:35.352 "data_offset": 2048, 00:17:35.352 "data_size": 63488 00:17:35.352 } 00:17:35.352 ] 00:17:35.352 }' 00:17:35.352 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:35.352 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:35.352 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:35.352 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:35.352 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:35.352 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.352 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.352 [2024-11-15 10:45:56.355675] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:35.352 [2024-11-15 10:45:56.412792] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:35.352 [2024-11-15 10:45:56.412879] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:35.352 [2024-11-15 10:45:56.412906] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:35.352 [2024-11-15 10:45:56.412920] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:35.352 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.352 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:35.352 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:35.352 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:35.352 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:35.352 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:35.352 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:35.352 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.352 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.352 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.352 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.352 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.352 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.352 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.352 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.352 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.352 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.352 "name": "raid_bdev1", 00:17:35.352 "uuid": "bc7005d5-37ef-45bb-b416-abf510390be1", 00:17:35.352 "strip_size_kb": 64, 00:17:35.352 "state": "online", 00:17:35.352 "raid_level": "raid5f", 00:17:35.352 "superblock": true, 00:17:35.352 "num_base_bdevs": 4, 00:17:35.352 "num_base_bdevs_discovered": 3, 00:17:35.352 "num_base_bdevs_operational": 3, 00:17:35.352 "base_bdevs_list": [ 00:17:35.352 { 00:17:35.352 "name": null, 00:17:35.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.352 "is_configured": false, 00:17:35.352 "data_offset": 0, 00:17:35.352 "data_size": 63488 00:17:35.352 }, 00:17:35.352 { 00:17:35.352 "name": "BaseBdev2", 00:17:35.352 "uuid": "34786f23-0246-5bbc-9f5e-ce7883d8cee4", 00:17:35.352 "is_configured": true, 00:17:35.352 "data_offset": 2048, 00:17:35.352 "data_size": 63488 00:17:35.352 }, 00:17:35.352 { 00:17:35.352 "name": "BaseBdev3", 00:17:35.352 "uuid": "594c42bd-2013-5958-9071-515f21a29886", 00:17:35.352 "is_configured": true, 00:17:35.352 "data_offset": 2048, 00:17:35.352 "data_size": 63488 00:17:35.352 }, 00:17:35.352 { 00:17:35.352 "name": "BaseBdev4", 00:17:35.352 "uuid": "8fefe0c0-b21c-54a2-9916-a2cc1ebb58ce", 
00:17:35.352 "is_configured": true, 00:17:35.352 "data_offset": 2048, 00:17:35.352 "data_size": 63488 00:17:35.352 } 00:17:35.352 ] 00:17:35.352 }' 00:17:35.352 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.352 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.919 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:35.919 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:35.919 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:35.919 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:35.919 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:35.919 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.919 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.919 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.919 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.919 10:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.919 10:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:35.919 "name": "raid_bdev1", 00:17:35.919 "uuid": "bc7005d5-37ef-45bb-b416-abf510390be1", 00:17:35.919 "strip_size_kb": 64, 00:17:35.919 "state": "online", 00:17:35.919 "raid_level": "raid5f", 00:17:35.919 "superblock": true, 00:17:35.919 "num_base_bdevs": 4, 00:17:35.919 "num_base_bdevs_discovered": 3, 00:17:35.919 "num_base_bdevs_operational": 3, 00:17:35.919 "base_bdevs_list": [ 00:17:35.919 { 00:17:35.919 
"name": null, 00:17:35.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.919 "is_configured": false, 00:17:35.919 "data_offset": 0, 00:17:35.919 "data_size": 63488 00:17:35.919 }, 00:17:35.919 { 00:17:35.919 "name": "BaseBdev2", 00:17:35.919 "uuid": "34786f23-0246-5bbc-9f5e-ce7883d8cee4", 00:17:35.919 "is_configured": true, 00:17:35.919 "data_offset": 2048, 00:17:35.919 "data_size": 63488 00:17:35.919 }, 00:17:35.919 { 00:17:35.919 "name": "BaseBdev3", 00:17:35.919 "uuid": "594c42bd-2013-5958-9071-515f21a29886", 00:17:35.919 "is_configured": true, 00:17:35.919 "data_offset": 2048, 00:17:35.919 "data_size": 63488 00:17:35.919 }, 00:17:35.919 { 00:17:35.919 "name": "BaseBdev4", 00:17:35.919 "uuid": "8fefe0c0-b21c-54a2-9916-a2cc1ebb58ce", 00:17:35.919 "is_configured": true, 00:17:35.919 "data_offset": 2048, 00:17:35.919 "data_size": 63488 00:17:35.919 } 00:17:35.919 ] 00:17:35.919 }' 00:17:35.919 10:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:35.919 10:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:35.919 10:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:36.177 10:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:36.177 10:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:36.177 10:45:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.177 10:45:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.178 [2024-11-15 10:45:57.112608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:36.178 [2024-11-15 10:45:57.126011] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:17:36.178 10:45:57 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.178 10:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:36.178 [2024-11-15 10:45:57.134722] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:37.112 10:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:37.112 10:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:37.112 10:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:37.112 10:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:37.112 10:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:37.112 10:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.112 10:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.112 10:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.112 10:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.112 10:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.112 10:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:37.112 "name": "raid_bdev1", 00:17:37.112 "uuid": "bc7005d5-37ef-45bb-b416-abf510390be1", 00:17:37.112 "strip_size_kb": 64, 00:17:37.112 "state": "online", 00:17:37.112 "raid_level": "raid5f", 00:17:37.112 "superblock": true, 00:17:37.112 "num_base_bdevs": 4, 00:17:37.112 "num_base_bdevs_discovered": 4, 00:17:37.112 "num_base_bdevs_operational": 4, 00:17:37.112 "process": { 00:17:37.112 "type": "rebuild", 00:17:37.112 "target": "spare", 00:17:37.112 "progress": { 
00:17:37.112 "blocks": 17280, 00:17:37.112 "percent": 9 00:17:37.112 } 00:17:37.112 }, 00:17:37.112 "base_bdevs_list": [ 00:17:37.112 { 00:17:37.112 "name": "spare", 00:17:37.112 "uuid": "4090a330-65a6-5519-8c82-f12520887ced", 00:17:37.112 "is_configured": true, 00:17:37.112 "data_offset": 2048, 00:17:37.112 "data_size": 63488 00:17:37.112 }, 00:17:37.112 { 00:17:37.112 "name": "BaseBdev2", 00:17:37.112 "uuid": "34786f23-0246-5bbc-9f5e-ce7883d8cee4", 00:17:37.112 "is_configured": true, 00:17:37.112 "data_offset": 2048, 00:17:37.112 "data_size": 63488 00:17:37.112 }, 00:17:37.112 { 00:17:37.112 "name": "BaseBdev3", 00:17:37.112 "uuid": "594c42bd-2013-5958-9071-515f21a29886", 00:17:37.112 "is_configured": true, 00:17:37.112 "data_offset": 2048, 00:17:37.112 "data_size": 63488 00:17:37.112 }, 00:17:37.112 { 00:17:37.112 "name": "BaseBdev4", 00:17:37.112 "uuid": "8fefe0c0-b21c-54a2-9916-a2cc1ebb58ce", 00:17:37.112 "is_configured": true, 00:17:37.112 "data_offset": 2048, 00:17:37.112 "data_size": 63488 00:17:37.112 } 00:17:37.112 ] 00:17:37.112 }' 00:17:37.112 10:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:37.112 10:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:37.112 10:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:37.371 10:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:37.371 10:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:37.371 10:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:37.371 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:37.371 10:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:37.371 10:45:58 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:37.371 10:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=687 00:17:37.371 10:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:37.371 10:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:37.371 10:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:37.371 10:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:37.371 10:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:37.371 10:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:37.371 10:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.371 10:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.371 10:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.371 10:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.371 10:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.371 10:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:37.371 "name": "raid_bdev1", 00:17:37.371 "uuid": "bc7005d5-37ef-45bb-b416-abf510390be1", 00:17:37.371 "strip_size_kb": 64, 00:17:37.371 "state": "online", 00:17:37.371 "raid_level": "raid5f", 00:17:37.371 "superblock": true, 00:17:37.371 "num_base_bdevs": 4, 00:17:37.371 "num_base_bdevs_discovered": 4, 00:17:37.371 "num_base_bdevs_operational": 4, 00:17:37.371 "process": { 00:17:37.371 "type": "rebuild", 00:17:37.371 "target": "spare", 00:17:37.371 
"progress": { 00:17:37.371 "blocks": 21120, 00:17:37.371 "percent": 11 00:17:37.371 } 00:17:37.371 }, 00:17:37.371 "base_bdevs_list": [ 00:17:37.371 { 00:17:37.371 "name": "spare", 00:17:37.371 "uuid": "4090a330-65a6-5519-8c82-f12520887ced", 00:17:37.371 "is_configured": true, 00:17:37.371 "data_offset": 2048, 00:17:37.371 "data_size": 63488 00:17:37.371 }, 00:17:37.371 { 00:17:37.371 "name": "BaseBdev2", 00:17:37.371 "uuid": "34786f23-0246-5bbc-9f5e-ce7883d8cee4", 00:17:37.371 "is_configured": true, 00:17:37.371 "data_offset": 2048, 00:17:37.371 "data_size": 63488 00:17:37.371 }, 00:17:37.371 { 00:17:37.371 "name": "BaseBdev3", 00:17:37.371 "uuid": "594c42bd-2013-5958-9071-515f21a29886", 00:17:37.371 "is_configured": true, 00:17:37.371 "data_offset": 2048, 00:17:37.371 "data_size": 63488 00:17:37.371 }, 00:17:37.371 { 00:17:37.371 "name": "BaseBdev4", 00:17:37.371 "uuid": "8fefe0c0-b21c-54a2-9916-a2cc1ebb58ce", 00:17:37.371 "is_configured": true, 00:17:37.371 "data_offset": 2048, 00:17:37.371 "data_size": 63488 00:17:37.371 } 00:17:37.371 ] 00:17:37.371 }' 00:17:37.371 10:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:37.371 10:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:37.371 10:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:37.371 10:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:37.371 10:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:38.305 10:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:38.305 10:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:38.305 10:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:17:38.305 10:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:38.305 10:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:38.305 10:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:38.305 10:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.305 10:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.305 10:45:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.305 10:45:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.563 10:45:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.563 10:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.563 "name": "raid_bdev1", 00:17:38.563 "uuid": "bc7005d5-37ef-45bb-b416-abf510390be1", 00:17:38.563 "strip_size_kb": 64, 00:17:38.563 "state": "online", 00:17:38.563 "raid_level": "raid5f", 00:17:38.563 "superblock": true, 00:17:38.563 "num_base_bdevs": 4, 00:17:38.563 "num_base_bdevs_discovered": 4, 00:17:38.563 "num_base_bdevs_operational": 4, 00:17:38.563 "process": { 00:17:38.563 "type": "rebuild", 00:17:38.563 "target": "spare", 00:17:38.563 "progress": { 00:17:38.563 "blocks": 44160, 00:17:38.563 "percent": 23 00:17:38.563 } 00:17:38.563 }, 00:17:38.563 "base_bdevs_list": [ 00:17:38.563 { 00:17:38.563 "name": "spare", 00:17:38.563 "uuid": "4090a330-65a6-5519-8c82-f12520887ced", 00:17:38.563 "is_configured": true, 00:17:38.563 "data_offset": 2048, 00:17:38.563 "data_size": 63488 00:17:38.563 }, 00:17:38.563 { 00:17:38.563 "name": "BaseBdev2", 00:17:38.563 "uuid": "34786f23-0246-5bbc-9f5e-ce7883d8cee4", 00:17:38.563 "is_configured": true, 00:17:38.563 "data_offset": 2048, 00:17:38.563 
"data_size": 63488 00:17:38.563 }, 00:17:38.563 { 00:17:38.563 "name": "BaseBdev3", 00:17:38.563 "uuid": "594c42bd-2013-5958-9071-515f21a29886", 00:17:38.563 "is_configured": true, 00:17:38.563 "data_offset": 2048, 00:17:38.563 "data_size": 63488 00:17:38.563 }, 00:17:38.564 { 00:17:38.564 "name": "BaseBdev4", 00:17:38.564 "uuid": "8fefe0c0-b21c-54a2-9916-a2cc1ebb58ce", 00:17:38.564 "is_configured": true, 00:17:38.564 "data_offset": 2048, 00:17:38.564 "data_size": 63488 00:17:38.564 } 00:17:38.564 ] 00:17:38.564 }' 00:17:38.564 10:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.564 10:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:38.564 10:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.564 10:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:38.564 10:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:39.497 10:46:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:39.497 10:46:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:39.497 10:46:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.497 10:46:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:39.497 10:46:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:39.497 10:46:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.497 10:46:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.497 10:46:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.497 
10:46:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.497 10:46:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.497 10:46:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.756 10:46:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.756 "name": "raid_bdev1", 00:17:39.756 "uuid": "bc7005d5-37ef-45bb-b416-abf510390be1", 00:17:39.756 "strip_size_kb": 64, 00:17:39.756 "state": "online", 00:17:39.756 "raid_level": "raid5f", 00:17:39.756 "superblock": true, 00:17:39.756 "num_base_bdevs": 4, 00:17:39.756 "num_base_bdevs_discovered": 4, 00:17:39.756 "num_base_bdevs_operational": 4, 00:17:39.756 "process": { 00:17:39.756 "type": "rebuild", 00:17:39.756 "target": "spare", 00:17:39.756 "progress": { 00:17:39.756 "blocks": 65280, 00:17:39.756 "percent": 34 00:17:39.756 } 00:17:39.756 }, 00:17:39.756 "base_bdevs_list": [ 00:17:39.756 { 00:17:39.756 "name": "spare", 00:17:39.756 "uuid": "4090a330-65a6-5519-8c82-f12520887ced", 00:17:39.756 "is_configured": true, 00:17:39.756 "data_offset": 2048, 00:17:39.756 "data_size": 63488 00:17:39.756 }, 00:17:39.756 { 00:17:39.756 "name": "BaseBdev2", 00:17:39.756 "uuid": "34786f23-0246-5bbc-9f5e-ce7883d8cee4", 00:17:39.756 "is_configured": true, 00:17:39.756 "data_offset": 2048, 00:17:39.756 "data_size": 63488 00:17:39.756 }, 00:17:39.756 { 00:17:39.756 "name": "BaseBdev3", 00:17:39.756 "uuid": "594c42bd-2013-5958-9071-515f21a29886", 00:17:39.756 "is_configured": true, 00:17:39.756 "data_offset": 2048, 00:17:39.756 "data_size": 63488 00:17:39.756 }, 00:17:39.756 { 00:17:39.756 "name": "BaseBdev4", 00:17:39.756 "uuid": "8fefe0c0-b21c-54a2-9916-a2cc1ebb58ce", 00:17:39.756 "is_configured": true, 00:17:39.756 "data_offset": 2048, 00:17:39.756 "data_size": 63488 00:17:39.756 } 00:17:39.756 ] 00:17:39.756 }' 00:17:39.756 10:46:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.756 10:46:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:39.756 10:46:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:39.756 10:46:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:39.756 10:46:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:40.690 10:46:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:40.690 10:46:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:40.690 10:46:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:40.690 10:46:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:40.690 10:46:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:40.690 10:46:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:40.690 10:46:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.690 10:46:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.690 10:46:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.690 10:46:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.690 10:46:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.690 10:46:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:40.690 "name": "raid_bdev1", 00:17:40.690 "uuid": "bc7005d5-37ef-45bb-b416-abf510390be1", 00:17:40.690 
"strip_size_kb": 64, 00:17:40.690 "state": "online", 00:17:40.690 "raid_level": "raid5f", 00:17:40.690 "superblock": true, 00:17:40.690 "num_base_bdevs": 4, 00:17:40.690 "num_base_bdevs_discovered": 4, 00:17:40.690 "num_base_bdevs_operational": 4, 00:17:40.690 "process": { 00:17:40.690 "type": "rebuild", 00:17:40.690 "target": "spare", 00:17:40.690 "progress": { 00:17:40.690 "blocks": 88320, 00:17:40.690 "percent": 46 00:17:40.690 } 00:17:40.690 }, 00:17:40.690 "base_bdevs_list": [ 00:17:40.690 { 00:17:40.690 "name": "spare", 00:17:40.690 "uuid": "4090a330-65a6-5519-8c82-f12520887ced", 00:17:40.690 "is_configured": true, 00:17:40.690 "data_offset": 2048, 00:17:40.690 "data_size": 63488 00:17:40.690 }, 00:17:40.690 { 00:17:40.690 "name": "BaseBdev2", 00:17:40.690 "uuid": "34786f23-0246-5bbc-9f5e-ce7883d8cee4", 00:17:40.690 "is_configured": true, 00:17:40.690 "data_offset": 2048, 00:17:40.690 "data_size": 63488 00:17:40.690 }, 00:17:40.690 { 00:17:40.690 "name": "BaseBdev3", 00:17:40.690 "uuid": "594c42bd-2013-5958-9071-515f21a29886", 00:17:40.690 "is_configured": true, 00:17:40.690 "data_offset": 2048, 00:17:40.690 "data_size": 63488 00:17:40.690 }, 00:17:40.690 { 00:17:40.690 "name": "BaseBdev4", 00:17:40.690 "uuid": "8fefe0c0-b21c-54a2-9916-a2cc1ebb58ce", 00:17:40.690 "is_configured": true, 00:17:40.690 "data_offset": 2048, 00:17:40.690 "data_size": 63488 00:17:40.690 } 00:17:40.690 ] 00:17:40.690 }' 00:17:40.690 10:46:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:40.949 10:46:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:40.949 10:46:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:40.949 10:46:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:40.949 10:46:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:41.884 
10:46:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:41.884 10:46:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:41.884 10:46:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.884 10:46:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:41.884 10:46:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:41.884 10:46:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.884 10:46:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.884 10:46:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.884 10:46:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.884 10:46:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.884 10:46:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.884 10:46:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.884 "name": "raid_bdev1", 00:17:41.884 "uuid": "bc7005d5-37ef-45bb-b416-abf510390be1", 00:17:41.884 "strip_size_kb": 64, 00:17:41.884 "state": "online", 00:17:41.884 "raid_level": "raid5f", 00:17:41.884 "superblock": true, 00:17:41.884 "num_base_bdevs": 4, 00:17:41.884 "num_base_bdevs_discovered": 4, 00:17:41.884 "num_base_bdevs_operational": 4, 00:17:41.884 "process": { 00:17:41.884 "type": "rebuild", 00:17:41.884 "target": "spare", 00:17:41.884 "progress": { 00:17:41.884 "blocks": 109440, 00:17:41.884 "percent": 57 00:17:41.884 } 00:17:41.884 }, 00:17:41.884 "base_bdevs_list": [ 00:17:41.884 { 00:17:41.884 "name": "spare", 00:17:41.884 "uuid": 
"4090a330-65a6-5519-8c82-f12520887ced", 00:17:41.884 "is_configured": true, 00:17:41.884 "data_offset": 2048, 00:17:41.884 "data_size": 63488 00:17:41.884 }, 00:17:41.884 { 00:17:41.884 "name": "BaseBdev2", 00:17:41.884 "uuid": "34786f23-0246-5bbc-9f5e-ce7883d8cee4", 00:17:41.884 "is_configured": true, 00:17:41.884 "data_offset": 2048, 00:17:41.884 "data_size": 63488 00:17:41.884 }, 00:17:41.884 { 00:17:41.884 "name": "BaseBdev3", 00:17:41.884 "uuid": "594c42bd-2013-5958-9071-515f21a29886", 00:17:41.884 "is_configured": true, 00:17:41.884 "data_offset": 2048, 00:17:41.884 "data_size": 63488 00:17:41.884 }, 00:17:41.884 { 00:17:41.884 "name": "BaseBdev4", 00:17:41.884 "uuid": "8fefe0c0-b21c-54a2-9916-a2cc1ebb58ce", 00:17:41.884 "is_configured": true, 00:17:41.884 "data_offset": 2048, 00:17:41.884 "data_size": 63488 00:17:41.884 } 00:17:41.884 ] 00:17:41.884 }' 00:17:41.884 10:46:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.143 10:46:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:42.143 10:46:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.143 10:46:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:42.143 10:46:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:43.076 10:46:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:43.077 10:46:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:43.077 10:46:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.077 10:46:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:43.077 10:46:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:17:43.077 10:46:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.077 10:46:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.077 10:46:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.077 10:46:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.077 10:46:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.077 10:46:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.077 10:46:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.077 "name": "raid_bdev1", 00:17:43.077 "uuid": "bc7005d5-37ef-45bb-b416-abf510390be1", 00:17:43.077 "strip_size_kb": 64, 00:17:43.077 "state": "online", 00:17:43.077 "raid_level": "raid5f", 00:17:43.077 "superblock": true, 00:17:43.077 "num_base_bdevs": 4, 00:17:43.077 "num_base_bdevs_discovered": 4, 00:17:43.077 "num_base_bdevs_operational": 4, 00:17:43.077 "process": { 00:17:43.077 "type": "rebuild", 00:17:43.077 "target": "spare", 00:17:43.077 "progress": { 00:17:43.077 "blocks": 132480, 00:17:43.077 "percent": 69 00:17:43.077 } 00:17:43.077 }, 00:17:43.077 "base_bdevs_list": [ 00:17:43.077 { 00:17:43.077 "name": "spare", 00:17:43.077 "uuid": "4090a330-65a6-5519-8c82-f12520887ced", 00:17:43.077 "is_configured": true, 00:17:43.077 "data_offset": 2048, 00:17:43.077 "data_size": 63488 00:17:43.077 }, 00:17:43.077 { 00:17:43.077 "name": "BaseBdev2", 00:17:43.077 "uuid": "34786f23-0246-5bbc-9f5e-ce7883d8cee4", 00:17:43.077 "is_configured": true, 00:17:43.077 "data_offset": 2048, 00:17:43.077 "data_size": 63488 00:17:43.077 }, 00:17:43.077 { 00:17:43.077 "name": "BaseBdev3", 00:17:43.077 "uuid": "594c42bd-2013-5958-9071-515f21a29886", 00:17:43.077 "is_configured": true, 00:17:43.077 
"data_offset": 2048, 00:17:43.077 "data_size": 63488 00:17:43.077 }, 00:17:43.077 { 00:17:43.077 "name": "BaseBdev4", 00:17:43.077 "uuid": "8fefe0c0-b21c-54a2-9916-a2cc1ebb58ce", 00:17:43.077 "is_configured": true, 00:17:43.077 "data_offset": 2048, 00:17:43.077 "data_size": 63488 00:17:43.077 } 00:17:43.077 ] 00:17:43.077 }' 00:17:43.077 10:46:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.077 10:46:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:43.077 10:46:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.335 10:46:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:43.335 10:46:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:44.270 10:46:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:44.270 10:46:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:44.270 10:46:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:44.270 10:46:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:44.270 10:46:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:44.270 10:46:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:44.270 10:46:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.270 10:46:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.270 10:46:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.270 10:46:05 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:44.270 10:46:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.270 10:46:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:44.270 "name": "raid_bdev1", 00:17:44.270 "uuid": "bc7005d5-37ef-45bb-b416-abf510390be1", 00:17:44.270 "strip_size_kb": 64, 00:17:44.270 "state": "online", 00:17:44.270 "raid_level": "raid5f", 00:17:44.270 "superblock": true, 00:17:44.270 "num_base_bdevs": 4, 00:17:44.270 "num_base_bdevs_discovered": 4, 00:17:44.270 "num_base_bdevs_operational": 4, 00:17:44.270 "process": { 00:17:44.270 "type": "rebuild", 00:17:44.270 "target": "spare", 00:17:44.270 "progress": { 00:17:44.270 "blocks": 153600, 00:17:44.270 "percent": 80 00:17:44.270 } 00:17:44.270 }, 00:17:44.270 "base_bdevs_list": [ 00:17:44.270 { 00:17:44.270 "name": "spare", 00:17:44.270 "uuid": "4090a330-65a6-5519-8c82-f12520887ced", 00:17:44.270 "is_configured": true, 00:17:44.270 "data_offset": 2048, 00:17:44.270 "data_size": 63488 00:17:44.270 }, 00:17:44.270 { 00:17:44.270 "name": "BaseBdev2", 00:17:44.270 "uuid": "34786f23-0246-5bbc-9f5e-ce7883d8cee4", 00:17:44.270 "is_configured": true, 00:17:44.270 "data_offset": 2048, 00:17:44.270 "data_size": 63488 00:17:44.270 }, 00:17:44.270 { 00:17:44.270 "name": "BaseBdev3", 00:17:44.270 "uuid": "594c42bd-2013-5958-9071-515f21a29886", 00:17:44.270 "is_configured": true, 00:17:44.270 "data_offset": 2048, 00:17:44.270 "data_size": 63488 00:17:44.270 }, 00:17:44.270 { 00:17:44.270 "name": "BaseBdev4", 00:17:44.270 "uuid": "8fefe0c0-b21c-54a2-9916-a2cc1ebb58ce", 00:17:44.270 "is_configured": true, 00:17:44.270 "data_offset": 2048, 00:17:44.270 "data_size": 63488 00:17:44.270 } 00:17:44.270 ] 00:17:44.270 }' 00:17:44.270 10:46:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:44.270 10:46:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild 
== \r\e\b\u\i\l\d ]] 00:17:44.270 10:46:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:44.270 10:46:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:44.270 10:46:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:45.644 10:46:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:45.644 10:46:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:45.644 10:46:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:45.644 10:46:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:45.644 10:46:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:45.644 10:46:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:45.644 10:46:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.644 10:46:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.644 10:46:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.644 10:46:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.644 10:46:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.644 10:46:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:45.644 "name": "raid_bdev1", 00:17:45.644 "uuid": "bc7005d5-37ef-45bb-b416-abf510390be1", 00:17:45.644 "strip_size_kb": 64, 00:17:45.644 "state": "online", 00:17:45.644 "raid_level": "raid5f", 00:17:45.644 "superblock": true, 00:17:45.644 "num_base_bdevs": 4, 00:17:45.644 "num_base_bdevs_discovered": 4, 
00:17:45.644 "num_base_bdevs_operational": 4, 00:17:45.644 "process": { 00:17:45.644 "type": "rebuild", 00:17:45.644 "target": "spare", 00:17:45.644 "progress": { 00:17:45.644 "blocks": 176640, 00:17:45.644 "percent": 92 00:17:45.644 } 00:17:45.644 }, 00:17:45.644 "base_bdevs_list": [ 00:17:45.644 { 00:17:45.644 "name": "spare", 00:17:45.644 "uuid": "4090a330-65a6-5519-8c82-f12520887ced", 00:17:45.644 "is_configured": true, 00:17:45.644 "data_offset": 2048, 00:17:45.644 "data_size": 63488 00:17:45.644 }, 00:17:45.644 { 00:17:45.644 "name": "BaseBdev2", 00:17:45.644 "uuid": "34786f23-0246-5bbc-9f5e-ce7883d8cee4", 00:17:45.644 "is_configured": true, 00:17:45.644 "data_offset": 2048, 00:17:45.644 "data_size": 63488 00:17:45.644 }, 00:17:45.644 { 00:17:45.645 "name": "BaseBdev3", 00:17:45.645 "uuid": "594c42bd-2013-5958-9071-515f21a29886", 00:17:45.645 "is_configured": true, 00:17:45.645 "data_offset": 2048, 00:17:45.645 "data_size": 63488 00:17:45.645 }, 00:17:45.645 { 00:17:45.645 "name": "BaseBdev4", 00:17:45.645 "uuid": "8fefe0c0-b21c-54a2-9916-a2cc1ebb58ce", 00:17:45.645 "is_configured": true, 00:17:45.645 "data_offset": 2048, 00:17:45.645 "data_size": 63488 00:17:45.645 } 00:17:45.645 ] 00:17:45.645 }' 00:17:45.645 10:46:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:45.645 10:46:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:45.645 10:46:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:45.645 10:46:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:45.645 10:46:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:46.210 [2024-11-15 10:46:07.229621] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:46.210 [2024-11-15 10:46:07.229718] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:46.210 [2024-11-15 10:46:07.229884] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.468 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:46.468 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:46.468 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:46.468 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:46.468 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:46.468 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:46.468 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.468 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.468 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.468 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.468 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.726 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:46.726 "name": "raid_bdev1", 00:17:46.726 "uuid": "bc7005d5-37ef-45bb-b416-abf510390be1", 00:17:46.726 "strip_size_kb": 64, 00:17:46.726 "state": "online", 00:17:46.726 "raid_level": "raid5f", 00:17:46.726 "superblock": true, 00:17:46.726 "num_base_bdevs": 4, 00:17:46.726 "num_base_bdevs_discovered": 4, 00:17:46.726 "num_base_bdevs_operational": 4, 00:17:46.726 "base_bdevs_list": [ 00:17:46.726 { 00:17:46.726 "name": "spare", 
00:17:46.726 "uuid": "4090a330-65a6-5519-8c82-f12520887ced", 00:17:46.726 "is_configured": true, 00:17:46.726 "data_offset": 2048, 00:17:46.726 "data_size": 63488 00:17:46.726 }, 00:17:46.726 { 00:17:46.726 "name": "BaseBdev2", 00:17:46.726 "uuid": "34786f23-0246-5bbc-9f5e-ce7883d8cee4", 00:17:46.726 "is_configured": true, 00:17:46.726 "data_offset": 2048, 00:17:46.726 "data_size": 63488 00:17:46.726 }, 00:17:46.726 { 00:17:46.726 "name": "BaseBdev3", 00:17:46.726 "uuid": "594c42bd-2013-5958-9071-515f21a29886", 00:17:46.726 "is_configured": true, 00:17:46.726 "data_offset": 2048, 00:17:46.726 "data_size": 63488 00:17:46.726 }, 00:17:46.726 { 00:17:46.726 "name": "BaseBdev4", 00:17:46.726 "uuid": "8fefe0c0-b21c-54a2-9916-a2cc1ebb58ce", 00:17:46.726 "is_configured": true, 00:17:46.726 "data_offset": 2048, 00:17:46.726 "data_size": 63488 00:17:46.726 } 00:17:46.726 ] 00:17:46.726 }' 00:17:46.726 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:46.726 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:46.726 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:46.726 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:46.726 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:46.726 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:46.726 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:46.726 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:46.726 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:46.726 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:17:46.726 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.726 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.726 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.726 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.726 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.726 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:46.726 "name": "raid_bdev1", 00:17:46.726 "uuid": "bc7005d5-37ef-45bb-b416-abf510390be1", 00:17:46.726 "strip_size_kb": 64, 00:17:46.726 "state": "online", 00:17:46.726 "raid_level": "raid5f", 00:17:46.726 "superblock": true, 00:17:46.726 "num_base_bdevs": 4, 00:17:46.726 "num_base_bdevs_discovered": 4, 00:17:46.726 "num_base_bdevs_operational": 4, 00:17:46.726 "base_bdevs_list": [ 00:17:46.726 { 00:17:46.726 "name": "spare", 00:17:46.726 "uuid": "4090a330-65a6-5519-8c82-f12520887ced", 00:17:46.726 "is_configured": true, 00:17:46.726 "data_offset": 2048, 00:17:46.726 "data_size": 63488 00:17:46.726 }, 00:17:46.726 { 00:17:46.726 "name": "BaseBdev2", 00:17:46.726 "uuid": "34786f23-0246-5bbc-9f5e-ce7883d8cee4", 00:17:46.726 "is_configured": true, 00:17:46.726 "data_offset": 2048, 00:17:46.726 "data_size": 63488 00:17:46.726 }, 00:17:46.726 { 00:17:46.726 "name": "BaseBdev3", 00:17:46.726 "uuid": "594c42bd-2013-5958-9071-515f21a29886", 00:17:46.726 "is_configured": true, 00:17:46.726 "data_offset": 2048, 00:17:46.726 "data_size": 63488 00:17:46.726 }, 00:17:46.726 { 00:17:46.726 "name": "BaseBdev4", 00:17:46.726 "uuid": "8fefe0c0-b21c-54a2-9916-a2cc1ebb58ce", 00:17:46.726 "is_configured": true, 00:17:46.727 "data_offset": 2048, 00:17:46.727 "data_size": 63488 00:17:46.727 } 00:17:46.727 ] 
00:17:46.727 }' 00:17:46.727 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:46.727 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:46.727 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:46.985 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:46.985 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:46.985 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:46.985 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:46.985 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:46.985 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:46.985 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:46.985 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.985 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.985 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.985 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.985 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.985 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.985 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.985 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.985 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.985 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.985 "name": "raid_bdev1", 00:17:46.985 "uuid": "bc7005d5-37ef-45bb-b416-abf510390be1", 00:17:46.985 "strip_size_kb": 64, 00:17:46.985 "state": "online", 00:17:46.985 "raid_level": "raid5f", 00:17:46.985 "superblock": true, 00:17:46.985 "num_base_bdevs": 4, 00:17:46.985 "num_base_bdevs_discovered": 4, 00:17:46.985 "num_base_bdevs_operational": 4, 00:17:46.985 "base_bdevs_list": [ 00:17:46.985 { 00:17:46.985 "name": "spare", 00:17:46.985 "uuid": "4090a330-65a6-5519-8c82-f12520887ced", 00:17:46.985 "is_configured": true, 00:17:46.985 "data_offset": 2048, 00:17:46.985 "data_size": 63488 00:17:46.985 }, 00:17:46.985 { 00:17:46.985 "name": "BaseBdev2", 00:17:46.985 "uuid": "34786f23-0246-5bbc-9f5e-ce7883d8cee4", 00:17:46.985 "is_configured": true, 00:17:46.985 "data_offset": 2048, 00:17:46.985 "data_size": 63488 00:17:46.985 }, 00:17:46.985 { 00:17:46.985 "name": "BaseBdev3", 00:17:46.985 "uuid": "594c42bd-2013-5958-9071-515f21a29886", 00:17:46.985 "is_configured": true, 00:17:46.985 "data_offset": 2048, 00:17:46.985 "data_size": 63488 00:17:46.985 }, 00:17:46.985 { 00:17:46.985 "name": "BaseBdev4", 00:17:46.985 "uuid": "8fefe0c0-b21c-54a2-9916-a2cc1ebb58ce", 00:17:46.985 "is_configured": true, 00:17:46.985 "data_offset": 2048, 00:17:46.985 "data_size": 63488 00:17:46.985 } 00:17:46.985 ] 00:17:46.985 }' 00:17:46.985 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.985 10:46:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.244 10:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:47.244 10:46:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:47.244 10:46:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.244 [2024-11-15 10:46:08.401598] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:47.244 [2024-11-15 10:46:08.401641] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:47.244 [2024-11-15 10:46:08.401738] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:47.244 [2024-11-15 10:46:08.401860] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:47.244 [2024-11-15 10:46:08.401890] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:47.502 10:46:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.502 10:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.502 10:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:47.502 10:46:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.502 10:46:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.502 10:46:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.502 10:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:47.502 10:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:47.502 10:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:47.502 10:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:47.502 10:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:17:47.502 10:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:47.502 10:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:47.502 10:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:47.502 10:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:47.502 10:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:47.502 10:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:47.502 10:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:47.502 10:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:47.760 /dev/nbd0 00:17:47.760 10:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:47.760 10:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:47.760 10:46:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:47.760 10:46:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:47.760 10:46:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:47.760 10:46:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:47.760 10:46:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:47.760 10:46:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:47.760 10:46:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:47.760 10:46:08 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:47.760 10:46:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:47.760 1+0 records in 00:17:47.760 1+0 records out 00:17:47.760 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252329 s, 16.2 MB/s 00:17:47.760 10:46:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:47.760 10:46:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:47.760 10:46:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:47.760 10:46:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:47.760 10:46:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:47.760 10:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:47.760 10:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:47.760 10:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:48.018 /dev/nbd1 00:17:48.018 10:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:48.018 10:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:48.018 10:46:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:48.018 10:46:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:48.018 10:46:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:48.018 10:46:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # 
(( i <= 20 )) 00:17:48.018 10:46:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:48.018 10:46:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:48.018 10:46:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:48.018 10:46:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:48.018 10:46:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:48.018 1+0 records in 00:17:48.018 1+0 records out 00:17:48.018 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408775 s, 10.0 MB/s 00:17:48.018 10:46:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:48.018 10:46:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:48.018 10:46:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:48.018 10:46:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:48.018 10:46:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:48.018 10:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:48.018 10:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:48.018 10:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:48.277 10:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:48.277 10:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:48.277 10:46:09 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:48.277 10:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:48.277 10:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:48.277 10:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:48.277 10:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:48.535 10:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:48.535 10:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:48.535 10:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:48.535 10:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:48.535 10:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:48.535 10:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:48.535 10:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:48.535 10:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:48.535 10:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:48.536 10:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:49.103 10:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:49.103 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:49.103 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:17:49.103 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:49.103 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:49.103 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:49.103 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:49.103 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:49.103 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:49.103 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:49.103 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.103 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.103 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.103 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:49.103 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.103 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.103 [2024-11-15 10:46:10.022681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:49.103 [2024-11-15 10:46:10.022765] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.103 [2024-11-15 10:46:10.022801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:49.103 [2024-11-15 10:46:10.022817] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:49.103 [2024-11-15 10:46:10.025758] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.103 
[2024-11-15 10:46:10.025807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:49.103 [2024-11-15 10:46:10.025926] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:49.103 [2024-11-15 10:46:10.026007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:49.103 [2024-11-15 10:46:10.026193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:49.103 [2024-11-15 10:46:10.026343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:49.103 [2024-11-15 10:46:10.026469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:49.103 spare 00:17:49.103 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.103 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:49.103 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.103 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.103 [2024-11-15 10:46:10.126654] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:49.103 [2024-11-15 10:46:10.126743] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:49.103 [2024-11-15 10:46:10.127233] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:17:49.103 [2024-11-15 10:46:10.134155] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:49.103 [2024-11-15 10:46:10.134188] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:49.103 [2024-11-15 10:46:10.134453] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:49.103 10:46:10 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.103 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:49.103 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:49.103 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:49.103 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:49.103 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:49.103 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:49.103 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.103 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.103 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.103 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.103 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.103 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.103 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.103 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.103 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.103 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.103 "name": "raid_bdev1", 00:17:49.103 "uuid": "bc7005d5-37ef-45bb-b416-abf510390be1", 00:17:49.103 "strip_size_kb": 
64, 00:17:49.103 "state": "online", 00:17:49.103 "raid_level": "raid5f", 00:17:49.103 "superblock": true, 00:17:49.103 "num_base_bdevs": 4, 00:17:49.103 "num_base_bdevs_discovered": 4, 00:17:49.103 "num_base_bdevs_operational": 4, 00:17:49.103 "base_bdevs_list": [ 00:17:49.103 { 00:17:49.103 "name": "spare", 00:17:49.103 "uuid": "4090a330-65a6-5519-8c82-f12520887ced", 00:17:49.103 "is_configured": true, 00:17:49.103 "data_offset": 2048, 00:17:49.103 "data_size": 63488 00:17:49.103 }, 00:17:49.103 { 00:17:49.103 "name": "BaseBdev2", 00:17:49.103 "uuid": "34786f23-0246-5bbc-9f5e-ce7883d8cee4", 00:17:49.103 "is_configured": true, 00:17:49.103 "data_offset": 2048, 00:17:49.103 "data_size": 63488 00:17:49.103 }, 00:17:49.103 { 00:17:49.103 "name": "BaseBdev3", 00:17:49.103 "uuid": "594c42bd-2013-5958-9071-515f21a29886", 00:17:49.103 "is_configured": true, 00:17:49.103 "data_offset": 2048, 00:17:49.103 "data_size": 63488 00:17:49.103 }, 00:17:49.103 { 00:17:49.103 "name": "BaseBdev4", 00:17:49.103 "uuid": "8fefe0c0-b21c-54a2-9916-a2cc1ebb58ce", 00:17:49.103 "is_configured": true, 00:17:49.103 "data_offset": 2048, 00:17:49.103 "data_size": 63488 00:17:49.103 } 00:17:49.103 ] 00:17:49.103 }' 00:17:49.104 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.104 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.670 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:49.670 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:49.670 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:49.670 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:49.670 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:49.670 10:46:10 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.670 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.670 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.670 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.670 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.670 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:49.670 "name": "raid_bdev1", 00:17:49.670 "uuid": "bc7005d5-37ef-45bb-b416-abf510390be1", 00:17:49.670 "strip_size_kb": 64, 00:17:49.670 "state": "online", 00:17:49.670 "raid_level": "raid5f", 00:17:49.670 "superblock": true, 00:17:49.670 "num_base_bdevs": 4, 00:17:49.670 "num_base_bdevs_discovered": 4, 00:17:49.670 "num_base_bdevs_operational": 4, 00:17:49.670 "base_bdevs_list": [ 00:17:49.670 { 00:17:49.670 "name": "spare", 00:17:49.670 "uuid": "4090a330-65a6-5519-8c82-f12520887ced", 00:17:49.670 "is_configured": true, 00:17:49.671 "data_offset": 2048, 00:17:49.671 "data_size": 63488 00:17:49.671 }, 00:17:49.671 { 00:17:49.671 "name": "BaseBdev2", 00:17:49.671 "uuid": "34786f23-0246-5bbc-9f5e-ce7883d8cee4", 00:17:49.671 "is_configured": true, 00:17:49.671 "data_offset": 2048, 00:17:49.671 "data_size": 63488 00:17:49.671 }, 00:17:49.671 { 00:17:49.671 "name": "BaseBdev3", 00:17:49.671 "uuid": "594c42bd-2013-5958-9071-515f21a29886", 00:17:49.671 "is_configured": true, 00:17:49.671 "data_offset": 2048, 00:17:49.671 "data_size": 63488 00:17:49.671 }, 00:17:49.671 { 00:17:49.671 "name": "BaseBdev4", 00:17:49.671 "uuid": "8fefe0c0-b21c-54a2-9916-a2cc1ebb58ce", 00:17:49.671 "is_configured": true, 00:17:49.671 "data_offset": 2048, 00:17:49.671 "data_size": 63488 00:17:49.671 } 00:17:49.671 ] 00:17:49.671 }' 00:17:49.671 10:46:10 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:49.671 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:49.671 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:49.671 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:49.671 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.671 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:49.671 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.671 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.929 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.929 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:49.929 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:49.929 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.929 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.929 [2024-11-15 10:46:10.862778] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:49.929 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.929 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:49.929 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:49.929 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:49.929 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:49.929 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:49.929 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:49.929 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.929 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.929 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.929 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.929 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.929 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.929 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.929 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.929 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.929 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.929 "name": "raid_bdev1", 00:17:49.929 "uuid": "bc7005d5-37ef-45bb-b416-abf510390be1", 00:17:49.929 "strip_size_kb": 64, 00:17:49.929 "state": "online", 00:17:49.929 "raid_level": "raid5f", 00:17:49.930 "superblock": true, 00:17:49.930 "num_base_bdevs": 4, 00:17:49.930 "num_base_bdevs_discovered": 3, 00:17:49.930 "num_base_bdevs_operational": 3, 00:17:49.930 "base_bdevs_list": [ 00:17:49.930 { 00:17:49.930 "name": null, 00:17:49.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.930 "is_configured": false, 00:17:49.930 
"data_offset": 0, 00:17:49.930 "data_size": 63488 00:17:49.930 }, 00:17:49.930 { 00:17:49.930 "name": "BaseBdev2", 00:17:49.930 "uuid": "34786f23-0246-5bbc-9f5e-ce7883d8cee4", 00:17:49.930 "is_configured": true, 00:17:49.930 "data_offset": 2048, 00:17:49.930 "data_size": 63488 00:17:49.930 }, 00:17:49.930 { 00:17:49.930 "name": "BaseBdev3", 00:17:49.930 "uuid": "594c42bd-2013-5958-9071-515f21a29886", 00:17:49.930 "is_configured": true, 00:17:49.930 "data_offset": 2048, 00:17:49.930 "data_size": 63488 00:17:49.930 }, 00:17:49.930 { 00:17:49.930 "name": "BaseBdev4", 00:17:49.930 "uuid": "8fefe0c0-b21c-54a2-9916-a2cc1ebb58ce", 00:17:49.930 "is_configured": true, 00:17:49.930 "data_offset": 2048, 00:17:49.930 "data_size": 63488 00:17:49.930 } 00:17:49.930 ] 00:17:49.930 }' 00:17:49.930 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.930 10:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.497 10:46:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:50.497 10:46:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.497 10:46:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.497 [2024-11-15 10:46:11.399065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:50.497 [2024-11-15 10:46:11.399310] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:50.497 [2024-11-15 10:46:11.399348] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:50.497 [2024-11-15 10:46:11.399404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:50.497 [2024-11-15 10:46:11.413766] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:17:50.497 10:46:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.497 10:46:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:50.497 [2024-11-15 10:46:11.423096] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:51.431 10:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:51.431 10:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:51.431 10:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:51.431 10:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:51.431 10:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:51.431 10:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.431 10:46:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.431 10:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.431 10:46:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.431 10:46:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.431 10:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:51.431 "name": "raid_bdev1", 00:17:51.431 "uuid": "bc7005d5-37ef-45bb-b416-abf510390be1", 00:17:51.431 "strip_size_kb": 64, 00:17:51.431 "state": "online", 00:17:51.431 
"raid_level": "raid5f", 00:17:51.431 "superblock": true, 00:17:51.431 "num_base_bdevs": 4, 00:17:51.431 "num_base_bdevs_discovered": 4, 00:17:51.431 "num_base_bdevs_operational": 4, 00:17:51.431 "process": { 00:17:51.431 "type": "rebuild", 00:17:51.431 "target": "spare", 00:17:51.431 "progress": { 00:17:51.431 "blocks": 17280, 00:17:51.431 "percent": 9 00:17:51.431 } 00:17:51.431 }, 00:17:51.431 "base_bdevs_list": [ 00:17:51.431 { 00:17:51.431 "name": "spare", 00:17:51.431 "uuid": "4090a330-65a6-5519-8c82-f12520887ced", 00:17:51.431 "is_configured": true, 00:17:51.431 "data_offset": 2048, 00:17:51.431 "data_size": 63488 00:17:51.431 }, 00:17:51.431 { 00:17:51.431 "name": "BaseBdev2", 00:17:51.431 "uuid": "34786f23-0246-5bbc-9f5e-ce7883d8cee4", 00:17:51.431 "is_configured": true, 00:17:51.431 "data_offset": 2048, 00:17:51.431 "data_size": 63488 00:17:51.431 }, 00:17:51.431 { 00:17:51.431 "name": "BaseBdev3", 00:17:51.431 "uuid": "594c42bd-2013-5958-9071-515f21a29886", 00:17:51.431 "is_configured": true, 00:17:51.431 "data_offset": 2048, 00:17:51.431 "data_size": 63488 00:17:51.431 }, 00:17:51.431 { 00:17:51.431 "name": "BaseBdev4", 00:17:51.431 "uuid": "8fefe0c0-b21c-54a2-9916-a2cc1ebb58ce", 00:17:51.431 "is_configured": true, 00:17:51.431 "data_offset": 2048, 00:17:51.431 "data_size": 63488 00:17:51.431 } 00:17:51.431 ] 00:17:51.431 }' 00:17:51.431 10:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:51.431 10:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:51.431 10:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:51.431 10:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:51.431 10:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:51.431 10:46:12 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.431 10:46:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.431 [2024-11-15 10:46:12.580910] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:51.690 [2024-11-15 10:46:12.635443] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:51.690 [2024-11-15 10:46:12.635541] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:51.690 [2024-11-15 10:46:12.635582] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:51.690 [2024-11-15 10:46:12.635630] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:51.690 10:46:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.690 10:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:51.690 10:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:51.690 10:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:51.690 10:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:51.690 10:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:51.690 10:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:51.690 10:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.690 10:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.690 10:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.690 10:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:17:51.690 10:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.690 10:46:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.690 10:46:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.690 10:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.690 10:46:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.690 10:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.690 "name": "raid_bdev1", 00:17:51.690 "uuid": "bc7005d5-37ef-45bb-b416-abf510390be1", 00:17:51.690 "strip_size_kb": 64, 00:17:51.690 "state": "online", 00:17:51.690 "raid_level": "raid5f", 00:17:51.690 "superblock": true, 00:17:51.690 "num_base_bdevs": 4, 00:17:51.690 "num_base_bdevs_discovered": 3, 00:17:51.690 "num_base_bdevs_operational": 3, 00:17:51.690 "base_bdevs_list": [ 00:17:51.690 { 00:17:51.690 "name": null, 00:17:51.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.690 "is_configured": false, 00:17:51.690 "data_offset": 0, 00:17:51.690 "data_size": 63488 00:17:51.690 }, 00:17:51.690 { 00:17:51.690 "name": "BaseBdev2", 00:17:51.690 "uuid": "34786f23-0246-5bbc-9f5e-ce7883d8cee4", 00:17:51.690 "is_configured": true, 00:17:51.690 "data_offset": 2048, 00:17:51.690 "data_size": 63488 00:17:51.690 }, 00:17:51.690 { 00:17:51.690 "name": "BaseBdev3", 00:17:51.690 "uuid": "594c42bd-2013-5958-9071-515f21a29886", 00:17:51.690 "is_configured": true, 00:17:51.690 "data_offset": 2048, 00:17:51.690 "data_size": 63488 00:17:51.690 }, 00:17:51.690 { 00:17:51.690 "name": "BaseBdev4", 00:17:51.690 "uuid": "8fefe0c0-b21c-54a2-9916-a2cc1ebb58ce", 00:17:51.690 "is_configured": true, 00:17:51.690 "data_offset": 2048, 00:17:51.690 "data_size": 63488 00:17:51.690 } 00:17:51.690 ] 00:17:51.690 }' 
00:17:51.690 10:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.690 10:46:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.257 10:46:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:52.257 10:46:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.257 10:46:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.257 [2024-11-15 10:46:13.171433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:52.257 [2024-11-15 10:46:13.171566] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:52.257 [2024-11-15 10:46:13.171607] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:17:52.257 [2024-11-15 10:46:13.171627] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:52.257 [2024-11-15 10:46:13.172234] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:52.257 [2024-11-15 10:46:13.172277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:52.257 [2024-11-15 10:46:13.172395] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:52.257 [2024-11-15 10:46:13.172420] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:52.257 [2024-11-15 10:46:13.172434] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:52.257 [2024-11-15 10:46:13.172471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:52.257 [2024-11-15 10:46:13.186007] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:17:52.257 spare 00:17:52.257 10:46:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.257 10:46:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:52.257 [2024-11-15 10:46:13.194665] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:53.192 10:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:53.192 10:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:53.192 10:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:53.192 10:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:53.192 10:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:53.192 10:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.192 10:46:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.192 10:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.192 10:46:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.192 10:46:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.192 10:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:53.192 "name": "raid_bdev1", 00:17:53.192 "uuid": "bc7005d5-37ef-45bb-b416-abf510390be1", 00:17:53.192 "strip_size_kb": 64, 00:17:53.192 "state": 
"online", 00:17:53.192 "raid_level": "raid5f", 00:17:53.192 "superblock": true, 00:17:53.192 "num_base_bdevs": 4, 00:17:53.192 "num_base_bdevs_discovered": 4, 00:17:53.192 "num_base_bdevs_operational": 4, 00:17:53.192 "process": { 00:17:53.192 "type": "rebuild", 00:17:53.192 "target": "spare", 00:17:53.192 "progress": { 00:17:53.192 "blocks": 17280, 00:17:53.192 "percent": 9 00:17:53.192 } 00:17:53.192 }, 00:17:53.192 "base_bdevs_list": [ 00:17:53.192 { 00:17:53.192 "name": "spare", 00:17:53.192 "uuid": "4090a330-65a6-5519-8c82-f12520887ced", 00:17:53.192 "is_configured": true, 00:17:53.192 "data_offset": 2048, 00:17:53.192 "data_size": 63488 00:17:53.192 }, 00:17:53.192 { 00:17:53.192 "name": "BaseBdev2", 00:17:53.192 "uuid": "34786f23-0246-5bbc-9f5e-ce7883d8cee4", 00:17:53.192 "is_configured": true, 00:17:53.192 "data_offset": 2048, 00:17:53.192 "data_size": 63488 00:17:53.192 }, 00:17:53.192 { 00:17:53.192 "name": "BaseBdev3", 00:17:53.192 "uuid": "594c42bd-2013-5958-9071-515f21a29886", 00:17:53.192 "is_configured": true, 00:17:53.192 "data_offset": 2048, 00:17:53.192 "data_size": 63488 00:17:53.192 }, 00:17:53.192 { 00:17:53.192 "name": "BaseBdev4", 00:17:53.192 "uuid": "8fefe0c0-b21c-54a2-9916-a2cc1ebb58ce", 00:17:53.192 "is_configured": true, 00:17:53.192 "data_offset": 2048, 00:17:53.192 "data_size": 63488 00:17:53.192 } 00:17:53.192 ] 00:17:53.192 }' 00:17:53.192 10:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:53.192 10:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:53.192 10:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:53.451 10:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:53.451 10:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:53.451 10:46:14 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.451 10:46:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.451 [2024-11-15 10:46:14.360462] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:53.451 [2024-11-15 10:46:14.407656] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:53.451 [2024-11-15 10:46:14.407733] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.451 [2024-11-15 10:46:14.407761] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:53.451 [2024-11-15 10:46:14.407773] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:53.451 10:46:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.451 10:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:53.451 10:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:53.451 10:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:53.451 10:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:53.451 10:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:53.451 10:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:53.451 10:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.451 10:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.451 10:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.451 10:46:14 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.451 10:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.451 10:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.451 10:46:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.451 10:46:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.451 10:46:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.451 10:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.451 "name": "raid_bdev1", 00:17:53.451 "uuid": "bc7005d5-37ef-45bb-b416-abf510390be1", 00:17:53.451 "strip_size_kb": 64, 00:17:53.452 "state": "online", 00:17:53.452 "raid_level": "raid5f", 00:17:53.452 "superblock": true, 00:17:53.452 "num_base_bdevs": 4, 00:17:53.452 "num_base_bdevs_discovered": 3, 00:17:53.452 "num_base_bdevs_operational": 3, 00:17:53.452 "base_bdevs_list": [ 00:17:53.452 { 00:17:53.452 "name": null, 00:17:53.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.452 "is_configured": false, 00:17:53.452 "data_offset": 0, 00:17:53.452 "data_size": 63488 00:17:53.452 }, 00:17:53.452 { 00:17:53.452 "name": "BaseBdev2", 00:17:53.452 "uuid": "34786f23-0246-5bbc-9f5e-ce7883d8cee4", 00:17:53.452 "is_configured": true, 00:17:53.452 "data_offset": 2048, 00:17:53.452 "data_size": 63488 00:17:53.452 }, 00:17:53.452 { 00:17:53.452 "name": "BaseBdev3", 00:17:53.452 "uuid": "594c42bd-2013-5958-9071-515f21a29886", 00:17:53.452 "is_configured": true, 00:17:53.452 "data_offset": 2048, 00:17:53.452 "data_size": 63488 00:17:53.452 }, 00:17:53.452 { 00:17:53.452 "name": "BaseBdev4", 00:17:53.452 "uuid": "8fefe0c0-b21c-54a2-9916-a2cc1ebb58ce", 00:17:53.452 "is_configured": true, 00:17:53.452 "data_offset": 2048, 00:17:53.452 
"data_size": 63488 00:17:53.452 } 00:17:53.452 ] 00:17:53.452 }' 00:17:53.452 10:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.452 10:46:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.037 10:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:54.037 10:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:54.037 10:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:54.037 10:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:54.037 10:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:54.037 10:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.037 10:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.037 10:46:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.037 10:46:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.037 10:46:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.037 10:46:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:54.037 "name": "raid_bdev1", 00:17:54.037 "uuid": "bc7005d5-37ef-45bb-b416-abf510390be1", 00:17:54.037 "strip_size_kb": 64, 00:17:54.037 "state": "online", 00:17:54.037 "raid_level": "raid5f", 00:17:54.037 "superblock": true, 00:17:54.037 "num_base_bdevs": 4, 00:17:54.037 "num_base_bdevs_discovered": 3, 00:17:54.037 "num_base_bdevs_operational": 3, 00:17:54.037 "base_bdevs_list": [ 00:17:54.037 { 00:17:54.037 "name": null, 00:17:54.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.037 
"is_configured": false, 00:17:54.037 "data_offset": 0, 00:17:54.037 "data_size": 63488 00:17:54.037 }, 00:17:54.037 { 00:17:54.037 "name": "BaseBdev2", 00:17:54.037 "uuid": "34786f23-0246-5bbc-9f5e-ce7883d8cee4", 00:17:54.037 "is_configured": true, 00:17:54.037 "data_offset": 2048, 00:17:54.037 "data_size": 63488 00:17:54.037 }, 00:17:54.037 { 00:17:54.037 "name": "BaseBdev3", 00:17:54.037 "uuid": "594c42bd-2013-5958-9071-515f21a29886", 00:17:54.037 "is_configured": true, 00:17:54.037 "data_offset": 2048, 00:17:54.037 "data_size": 63488 00:17:54.037 }, 00:17:54.037 { 00:17:54.037 "name": "BaseBdev4", 00:17:54.037 "uuid": "8fefe0c0-b21c-54a2-9916-a2cc1ebb58ce", 00:17:54.037 "is_configured": true, 00:17:54.037 "data_offset": 2048, 00:17:54.037 "data_size": 63488 00:17:54.037 } 00:17:54.037 ] 00:17:54.037 }' 00:17:54.037 10:46:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:54.037 10:46:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:54.037 10:46:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:54.037 10:46:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:54.037 10:46:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:54.037 10:46:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.037 10:46:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.037 10:46:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.037 10:46:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:54.037 10:46:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.037 10:46:15 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.037 [2024-11-15 10:46:15.152943] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:54.037 [2024-11-15 10:46:15.153010] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:54.037 [2024-11-15 10:46:15.153043] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:17:54.037 [2024-11-15 10:46:15.153058] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:54.037 [2024-11-15 10:46:15.153651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:54.037 [2024-11-15 10:46:15.153694] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:54.037 [2024-11-15 10:46:15.153798] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:54.037 [2024-11-15 10:46:15.153819] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:54.037 [2024-11-15 10:46:15.153835] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:54.037 [2024-11-15 10:46:15.153848] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:54.037 BaseBdev1 00:17:54.037 10:46:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.037 10:46:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:55.411 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:55.411 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:55.411 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:55.411 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:55.411 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:55.411 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:55.411 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.411 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.411 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.411 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.411 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.411 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.411 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.411 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.411 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.411 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.411 "name": "raid_bdev1", 00:17:55.411 "uuid": "bc7005d5-37ef-45bb-b416-abf510390be1", 00:17:55.411 "strip_size_kb": 64, 00:17:55.411 "state": "online", 00:17:55.411 "raid_level": "raid5f", 00:17:55.411 "superblock": true, 00:17:55.411 "num_base_bdevs": 4, 00:17:55.411 "num_base_bdevs_discovered": 3, 00:17:55.411 "num_base_bdevs_operational": 3, 00:17:55.411 "base_bdevs_list": [ 00:17:55.411 { 00:17:55.411 "name": null, 00:17:55.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.411 "is_configured": false, 00:17:55.411 
"data_offset": 0, 00:17:55.411 "data_size": 63488 00:17:55.411 }, 00:17:55.411 { 00:17:55.411 "name": "BaseBdev2", 00:17:55.411 "uuid": "34786f23-0246-5bbc-9f5e-ce7883d8cee4", 00:17:55.411 "is_configured": true, 00:17:55.411 "data_offset": 2048, 00:17:55.411 "data_size": 63488 00:17:55.411 }, 00:17:55.411 { 00:17:55.411 "name": "BaseBdev3", 00:17:55.411 "uuid": "594c42bd-2013-5958-9071-515f21a29886", 00:17:55.411 "is_configured": true, 00:17:55.411 "data_offset": 2048, 00:17:55.411 "data_size": 63488 00:17:55.411 }, 00:17:55.411 { 00:17:55.411 "name": "BaseBdev4", 00:17:55.411 "uuid": "8fefe0c0-b21c-54a2-9916-a2cc1ebb58ce", 00:17:55.411 "is_configured": true, 00:17:55.411 "data_offset": 2048, 00:17:55.411 "data_size": 63488 00:17:55.412 } 00:17:55.412 ] 00:17:55.412 }' 00:17:55.412 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.412 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.669 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:55.669 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:55.669 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:55.669 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:55.669 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:55.669 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.669 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.669 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.669 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:55.669 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.669 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:55.669 "name": "raid_bdev1", 00:17:55.669 "uuid": "bc7005d5-37ef-45bb-b416-abf510390be1", 00:17:55.669 "strip_size_kb": 64, 00:17:55.669 "state": "online", 00:17:55.669 "raid_level": "raid5f", 00:17:55.669 "superblock": true, 00:17:55.669 "num_base_bdevs": 4, 00:17:55.669 "num_base_bdevs_discovered": 3, 00:17:55.669 "num_base_bdevs_operational": 3, 00:17:55.669 "base_bdevs_list": [ 00:17:55.669 { 00:17:55.669 "name": null, 00:17:55.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.669 "is_configured": false, 00:17:55.669 "data_offset": 0, 00:17:55.669 "data_size": 63488 00:17:55.669 }, 00:17:55.669 { 00:17:55.669 "name": "BaseBdev2", 00:17:55.670 "uuid": "34786f23-0246-5bbc-9f5e-ce7883d8cee4", 00:17:55.670 "is_configured": true, 00:17:55.670 "data_offset": 2048, 00:17:55.670 "data_size": 63488 00:17:55.670 }, 00:17:55.670 { 00:17:55.670 "name": "BaseBdev3", 00:17:55.670 "uuid": "594c42bd-2013-5958-9071-515f21a29886", 00:17:55.670 "is_configured": true, 00:17:55.670 "data_offset": 2048, 00:17:55.670 "data_size": 63488 00:17:55.670 }, 00:17:55.670 { 00:17:55.670 "name": "BaseBdev4", 00:17:55.670 "uuid": "8fefe0c0-b21c-54a2-9916-a2cc1ebb58ce", 00:17:55.670 "is_configured": true, 00:17:55.670 "data_offset": 2048, 00:17:55.670 "data_size": 63488 00:17:55.670 } 00:17:55.670 ] 00:17:55.670 }' 00:17:55.670 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:55.670 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:55.670 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:55.927 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:55.927 
10:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:55.927 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:17:55.927 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:55.927 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:55.927 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:55.927 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:55.927 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:55.928 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:55.928 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.928 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.928 [2024-11-15 10:46:16.845530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:55.928 [2024-11-15 10:46:16.845752] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:55.928 [2024-11-15 10:46:16.845777] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:55.928 request: 00:17:55.928 { 00:17:55.928 "base_bdev": "BaseBdev1", 00:17:55.928 "raid_bdev": "raid_bdev1", 00:17:55.928 "method": "bdev_raid_add_base_bdev", 00:17:55.928 "req_id": 1 00:17:55.928 } 00:17:55.928 Got JSON-RPC error response 00:17:55.928 response: 00:17:55.928 { 00:17:55.928 "code": -22, 00:17:55.928 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:17:55.928 } 00:17:55.928 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:55.928 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:17:55.928 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:55.928 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:55.928 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:55.928 10:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:56.860 10:46:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:56.860 10:46:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:56.860 10:46:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.860 10:46:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:56.860 10:46:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:56.860 10:46:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:56.860 10:46:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.860 10:46:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.860 10:46:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.860 10:46:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.860 10:46:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.860 10:46:17 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.860 10:46:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.860 10:46:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.860 10:46:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.860 10:46:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.860 "name": "raid_bdev1", 00:17:56.860 "uuid": "bc7005d5-37ef-45bb-b416-abf510390be1", 00:17:56.860 "strip_size_kb": 64, 00:17:56.860 "state": "online", 00:17:56.860 "raid_level": "raid5f", 00:17:56.860 "superblock": true, 00:17:56.860 "num_base_bdevs": 4, 00:17:56.860 "num_base_bdevs_discovered": 3, 00:17:56.860 "num_base_bdevs_operational": 3, 00:17:56.860 "base_bdevs_list": [ 00:17:56.860 { 00:17:56.861 "name": null, 00:17:56.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.861 "is_configured": false, 00:17:56.861 "data_offset": 0, 00:17:56.861 "data_size": 63488 00:17:56.861 }, 00:17:56.861 { 00:17:56.861 "name": "BaseBdev2", 00:17:56.861 "uuid": "34786f23-0246-5bbc-9f5e-ce7883d8cee4", 00:17:56.861 "is_configured": true, 00:17:56.861 "data_offset": 2048, 00:17:56.861 "data_size": 63488 00:17:56.861 }, 00:17:56.861 { 00:17:56.861 "name": "BaseBdev3", 00:17:56.861 "uuid": "594c42bd-2013-5958-9071-515f21a29886", 00:17:56.861 "is_configured": true, 00:17:56.861 "data_offset": 2048, 00:17:56.861 "data_size": 63488 00:17:56.861 }, 00:17:56.861 { 00:17:56.861 "name": "BaseBdev4", 00:17:56.861 "uuid": "8fefe0c0-b21c-54a2-9916-a2cc1ebb58ce", 00:17:56.861 "is_configured": true, 00:17:56.861 "data_offset": 2048, 00:17:56.861 "data_size": 63488 00:17:56.861 } 00:17:56.861 ] 00:17:56.861 }' 00:17:56.861 10:46:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.861 10:46:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:17:57.426 10:46:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:57.426 10:46:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:57.426 10:46:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:57.426 10:46:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:57.426 10:46:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:57.426 10:46:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.426 10:46:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.426 10:46:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.427 10:46:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.427 10:46:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.427 10:46:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:57.427 "name": "raid_bdev1", 00:17:57.427 "uuid": "bc7005d5-37ef-45bb-b416-abf510390be1", 00:17:57.427 "strip_size_kb": 64, 00:17:57.427 "state": "online", 00:17:57.427 "raid_level": "raid5f", 00:17:57.427 "superblock": true, 00:17:57.427 "num_base_bdevs": 4, 00:17:57.427 "num_base_bdevs_discovered": 3, 00:17:57.427 "num_base_bdevs_operational": 3, 00:17:57.427 "base_bdevs_list": [ 00:17:57.427 { 00:17:57.427 "name": null, 00:17:57.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.427 "is_configured": false, 00:17:57.427 "data_offset": 0, 00:17:57.427 "data_size": 63488 00:17:57.427 }, 00:17:57.427 { 00:17:57.427 "name": "BaseBdev2", 00:17:57.427 "uuid": "34786f23-0246-5bbc-9f5e-ce7883d8cee4", 00:17:57.427 "is_configured": true, 
00:17:57.427 "data_offset": 2048, 00:17:57.427 "data_size": 63488 00:17:57.427 }, 00:17:57.427 { 00:17:57.427 "name": "BaseBdev3", 00:17:57.427 "uuid": "594c42bd-2013-5958-9071-515f21a29886", 00:17:57.427 "is_configured": true, 00:17:57.427 "data_offset": 2048, 00:17:57.427 "data_size": 63488 00:17:57.427 }, 00:17:57.427 { 00:17:57.427 "name": "BaseBdev4", 00:17:57.427 "uuid": "8fefe0c0-b21c-54a2-9916-a2cc1ebb58ce", 00:17:57.427 "is_configured": true, 00:17:57.427 "data_offset": 2048, 00:17:57.427 "data_size": 63488 00:17:57.427 } 00:17:57.427 ] 00:17:57.427 }' 00:17:57.427 10:46:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:57.427 10:46:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:57.427 10:46:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:57.427 10:46:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:57.427 10:46:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85470 00:17:57.427 10:46:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85470 ']' 00:17:57.427 10:46:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85470 00:17:57.427 10:46:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:57.427 10:46:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:57.427 10:46:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85470 00:17:57.427 10:46:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:57.427 killing process with pid 85470 00:17:57.427 10:46:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:57.427 10:46:18 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85470' 00:17:57.427 10:46:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85470 00:17:57.427 Received shutdown signal, test time was about 60.000000 seconds 00:17:57.427 00:17:57.427 Latency(us) 00:17:57.427 [2024-11-15T10:46:18.589Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.427 [2024-11-15T10:46:18.589Z] =================================================================================================================== 00:17:57.427 [2024-11-15T10:46:18.589Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:57.427 [2024-11-15 10:46:18.570090] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:57.427 10:46:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85470 00:17:57.427 [2024-11-15 10:46:18.570243] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:57.427 [2024-11-15 10:46:18.570351] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:57.427 [2024-11-15 10:46:18.570373] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:57.996 [2024-11-15 10:46:19.030302] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:58.931 10:46:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:58.931 00:17:58.931 real 0m28.781s 00:17:58.931 user 0m37.648s 00:17:58.931 sys 0m2.821s 00:17:58.931 10:46:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:58.931 ************************************ 00:17:58.931 END TEST raid5f_rebuild_test_sb 00:17:58.931 ************************************ 00:17:58.931 10:46:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.190 10:46:20 
bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:17:59.190 10:46:20 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:17:59.190 10:46:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:59.190 10:46:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:59.190 10:46:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:59.190 ************************************ 00:17:59.190 START TEST raid_state_function_test_sb_4k 00:17:59.190 ************************************ 00:17:59.190 10:46:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:59.190 10:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:59.190 10:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:59.190 10:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:59.190 10:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:59.190 10:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:59.190 10:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:59.190 10:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:59.190 10:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:59.190 10:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:59.190 10:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:59.190 10:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:59.190 10:46:20 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:59.190 10:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:59.190 10:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:59.190 10:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:59.190 10:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:59.190 10:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:59.190 10:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:59.190 10:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:59.190 10:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:59.190 10:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:59.190 10:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:59.190 10:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86293 00:17:59.190 10:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86293' 00:17:59.190 Process raid pid: 86293 00:17:59.190 10:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:59.190 10:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86293 00:17:59.190 10:46:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86293 ']' 00:17:59.190 10:46:20 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.190 10:46:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:59.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.190 10:46:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.190 10:46:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:59.190 10:46:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.190 [2024-11-15 10:46:20.255792] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:17:59.190 [2024-11-15 10:46:20.255978] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:59.449 [2024-11-15 10:46:20.450417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.449 [2024-11-15 10:46:20.584308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.707 [2024-11-15 10:46:20.793343] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:59.707 [2024-11-15 10:46:20.793405] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:00.273 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:00.273 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:18:00.273 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:18:00.273 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.273 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.273 [2024-11-15 10:46:21.246809] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:00.273 [2024-11-15 10:46:21.246875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:00.273 [2024-11-15 10:46:21.246892] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:00.273 [2024-11-15 10:46:21.246915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:00.273 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.273 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:00.273 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:00.273 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:00.273 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:00.273 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:00.273 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:00.273 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.273 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.273 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.273 
10:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.273 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.273 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.273 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.273 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.273 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.273 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.273 "name": "Existed_Raid", 00:18:00.273 "uuid": "bd7684fa-ec27-452f-a7a9-130b993d657f", 00:18:00.273 "strip_size_kb": 0, 00:18:00.273 "state": "configuring", 00:18:00.273 "raid_level": "raid1", 00:18:00.273 "superblock": true, 00:18:00.273 "num_base_bdevs": 2, 00:18:00.273 "num_base_bdevs_discovered": 0, 00:18:00.273 "num_base_bdevs_operational": 2, 00:18:00.273 "base_bdevs_list": [ 00:18:00.273 { 00:18:00.273 "name": "BaseBdev1", 00:18:00.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.273 "is_configured": false, 00:18:00.273 "data_offset": 0, 00:18:00.273 "data_size": 0 00:18:00.273 }, 00:18:00.273 { 00:18:00.273 "name": "BaseBdev2", 00:18:00.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.273 "is_configured": false, 00:18:00.273 "data_offset": 0, 00:18:00.273 "data_size": 0 00:18:00.273 } 00:18:00.273 ] 00:18:00.273 }' 00:18:00.273 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.273 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.840 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:18:00.840 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.840 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.840 [2024-11-15 10:46:21.766916] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:00.840 [2024-11-15 10:46:21.766971] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:00.840 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.840 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:00.840 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.840 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.840 [2024-11-15 10:46:21.774887] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:00.840 [2024-11-15 10:46:21.774945] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:00.840 [2024-11-15 10:46:21.774961] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:00.840 [2024-11-15 10:46:21.774980] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:00.840 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.840 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:18:00.840 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.840 10:46:21 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.840 [2024-11-15 10:46:21.820068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:00.840 BaseBdev1 00:18:00.840 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.840 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:00.840 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:00.840 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:00.840 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:18:00.840 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:00.840 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:00.840 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:00.840 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.840 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.840 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.840 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:00.840 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.840 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.840 [ 00:18:00.840 { 00:18:00.840 "name": "BaseBdev1", 00:18:00.840 "aliases": [ 00:18:00.840 
"7648ec71-41b9-4c47-8c86-ed9b85581753" 00:18:00.840 ], 00:18:00.840 "product_name": "Malloc disk", 00:18:00.840 "block_size": 4096, 00:18:00.840 "num_blocks": 8192, 00:18:00.840 "uuid": "7648ec71-41b9-4c47-8c86-ed9b85581753", 00:18:00.840 "assigned_rate_limits": { 00:18:00.840 "rw_ios_per_sec": 0, 00:18:00.840 "rw_mbytes_per_sec": 0, 00:18:00.840 "r_mbytes_per_sec": 0, 00:18:00.840 "w_mbytes_per_sec": 0 00:18:00.840 }, 00:18:00.840 "claimed": true, 00:18:00.840 "claim_type": "exclusive_write", 00:18:00.840 "zoned": false, 00:18:00.840 "supported_io_types": { 00:18:00.840 "read": true, 00:18:00.840 "write": true, 00:18:00.840 "unmap": true, 00:18:00.840 "flush": true, 00:18:00.841 "reset": true, 00:18:00.841 "nvme_admin": false, 00:18:00.841 "nvme_io": false, 00:18:00.841 "nvme_io_md": false, 00:18:00.841 "write_zeroes": true, 00:18:00.841 "zcopy": true, 00:18:00.841 "get_zone_info": false, 00:18:00.841 "zone_management": false, 00:18:00.841 "zone_append": false, 00:18:00.841 "compare": false, 00:18:00.841 "compare_and_write": false, 00:18:00.841 "abort": true, 00:18:00.841 "seek_hole": false, 00:18:00.841 "seek_data": false, 00:18:00.841 "copy": true, 00:18:00.841 "nvme_iov_md": false 00:18:00.841 }, 00:18:00.841 "memory_domains": [ 00:18:00.841 { 00:18:00.841 "dma_device_id": "system", 00:18:00.841 "dma_device_type": 1 00:18:00.841 }, 00:18:00.841 { 00:18:00.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.841 "dma_device_type": 2 00:18:00.841 } 00:18:00.841 ], 00:18:00.841 "driver_specific": {} 00:18:00.841 } 00:18:00.841 ] 00:18:00.841 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.841 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:18:00.841 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:00.841 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:00.841 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:00.841 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:00.841 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:00.841 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:00.841 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.841 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.841 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.841 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.841 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.841 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.841 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.841 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.841 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.841 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.841 "name": "Existed_Raid", 00:18:00.841 "uuid": "682b1b80-d50c-43be-9768-0b553c673678", 00:18:00.841 "strip_size_kb": 0, 00:18:00.841 "state": "configuring", 00:18:00.841 "raid_level": "raid1", 00:18:00.841 "superblock": true, 00:18:00.841 "num_base_bdevs": 2, 00:18:00.841 
"num_base_bdevs_discovered": 1, 00:18:00.841 "num_base_bdevs_operational": 2, 00:18:00.841 "base_bdevs_list": [ 00:18:00.841 { 00:18:00.841 "name": "BaseBdev1", 00:18:00.841 "uuid": "7648ec71-41b9-4c47-8c86-ed9b85581753", 00:18:00.841 "is_configured": true, 00:18:00.841 "data_offset": 256, 00:18:00.841 "data_size": 7936 00:18:00.841 }, 00:18:00.841 { 00:18:00.841 "name": "BaseBdev2", 00:18:00.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.841 "is_configured": false, 00:18:00.841 "data_offset": 0, 00:18:00.841 "data_size": 0 00:18:00.841 } 00:18:00.841 ] 00:18:00.841 }' 00:18:00.841 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.841 10:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.408 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:01.408 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.408 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.408 [2024-11-15 10:46:22.380287] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:01.409 [2024-11-15 10:46:22.380367] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:01.409 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.409 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:01.409 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.409 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.409 [2024-11-15 10:46:22.388311] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:01.409 [2024-11-15 10:46:22.390819] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:01.409 [2024-11-15 10:46:22.390874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:01.409 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.409 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:01.409 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:01.409 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:01.409 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:01.409 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:01.409 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:01.409 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:01.409 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:01.409 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.409 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.409 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.409 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.409 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:01.409 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:01.409 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.409 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.409 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.409 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.409 "name": "Existed_Raid", 00:18:01.409 "uuid": "a121c7ed-7eeb-4ef0-a44a-a7ad70864289", 00:18:01.409 "strip_size_kb": 0, 00:18:01.409 "state": "configuring", 00:18:01.409 "raid_level": "raid1", 00:18:01.409 "superblock": true, 00:18:01.409 "num_base_bdevs": 2, 00:18:01.409 "num_base_bdevs_discovered": 1, 00:18:01.409 "num_base_bdevs_operational": 2, 00:18:01.409 "base_bdevs_list": [ 00:18:01.409 { 00:18:01.409 "name": "BaseBdev1", 00:18:01.409 "uuid": "7648ec71-41b9-4c47-8c86-ed9b85581753", 00:18:01.409 "is_configured": true, 00:18:01.409 "data_offset": 256, 00:18:01.409 "data_size": 7936 00:18:01.409 }, 00:18:01.409 { 00:18:01.409 "name": "BaseBdev2", 00:18:01.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.409 "is_configured": false, 00:18:01.409 "data_offset": 0, 00:18:01.409 "data_size": 0 00:18:01.409 } 00:18:01.409 ] 00:18:01.409 }' 00:18:01.409 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.409 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.976 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:18:01.976 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.976 10:46:22 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.976 [2024-11-15 10:46:22.935040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:01.976 [2024-11-15 10:46:22.935343] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:01.976 [2024-11-15 10:46:22.935371] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:01.976 BaseBdev2 00:18:01.976 [2024-11-15 10:46:22.935834] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:01.976 [2024-11-15 10:46:22.936053] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:01.976 [2024-11-15 10:46:22.936084] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:01.976 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.976 [2024-11-15 10:46:22.936277] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:01.976 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:01.976 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:01.976 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:01.976 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:18:01.976 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:01.976 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:01.976 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:01.976 10:46:22 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.976 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.976 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.976 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:01.976 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.976 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.976 [ 00:18:01.976 { 00:18:01.976 "name": "BaseBdev2", 00:18:01.976 "aliases": [ 00:18:01.976 "4cfd47cc-a5c8-4f56-8306-f46403be55e7" 00:18:01.976 ], 00:18:01.976 "product_name": "Malloc disk", 00:18:01.976 "block_size": 4096, 00:18:01.976 "num_blocks": 8192, 00:18:01.976 "uuid": "4cfd47cc-a5c8-4f56-8306-f46403be55e7", 00:18:01.976 "assigned_rate_limits": { 00:18:01.976 "rw_ios_per_sec": 0, 00:18:01.976 "rw_mbytes_per_sec": 0, 00:18:01.976 "r_mbytes_per_sec": 0, 00:18:01.976 "w_mbytes_per_sec": 0 00:18:01.976 }, 00:18:01.976 "claimed": true, 00:18:01.976 "claim_type": "exclusive_write", 00:18:01.976 "zoned": false, 00:18:01.976 "supported_io_types": { 00:18:01.976 "read": true, 00:18:01.976 "write": true, 00:18:01.976 "unmap": true, 00:18:01.976 "flush": true, 00:18:01.976 "reset": true, 00:18:01.976 "nvme_admin": false, 00:18:01.976 "nvme_io": false, 00:18:01.976 "nvme_io_md": false, 00:18:01.977 "write_zeroes": true, 00:18:01.977 "zcopy": true, 00:18:01.977 "get_zone_info": false, 00:18:01.977 "zone_management": false, 00:18:01.977 "zone_append": false, 00:18:01.977 "compare": false, 00:18:01.977 "compare_and_write": false, 00:18:01.977 "abort": true, 00:18:01.977 "seek_hole": false, 00:18:01.977 "seek_data": false, 00:18:01.977 "copy": true, 00:18:01.977 "nvme_iov_md": false 
00:18:01.977 }, 00:18:01.977 "memory_domains": [ 00:18:01.977 { 00:18:01.977 "dma_device_id": "system", 00:18:01.977 "dma_device_type": 1 00:18:01.977 }, 00:18:01.977 { 00:18:01.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.977 "dma_device_type": 2 00:18:01.977 } 00:18:01.977 ], 00:18:01.977 "driver_specific": {} 00:18:01.977 } 00:18:01.977 ] 00:18:01.977 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.977 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:18:01.977 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:01.977 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:01.977 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:01.977 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:01.977 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:01.977 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:01.977 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:01.977 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:01.977 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.977 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.977 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.977 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:18:01.977 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:01.977 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.977 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.977 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.977 10:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.977 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.977 "name": "Existed_Raid", 00:18:01.977 "uuid": "a121c7ed-7eeb-4ef0-a44a-a7ad70864289", 00:18:01.977 "strip_size_kb": 0, 00:18:01.977 "state": "online", 00:18:01.977 "raid_level": "raid1", 00:18:01.977 "superblock": true, 00:18:01.977 "num_base_bdevs": 2, 00:18:01.977 "num_base_bdevs_discovered": 2, 00:18:01.977 "num_base_bdevs_operational": 2, 00:18:01.977 "base_bdevs_list": [ 00:18:01.977 { 00:18:01.977 "name": "BaseBdev1", 00:18:01.977 "uuid": "7648ec71-41b9-4c47-8c86-ed9b85581753", 00:18:01.977 "is_configured": true, 00:18:01.977 "data_offset": 256, 00:18:01.977 "data_size": 7936 00:18:01.977 }, 00:18:01.977 { 00:18:01.977 "name": "BaseBdev2", 00:18:01.977 "uuid": "4cfd47cc-a5c8-4f56-8306-f46403be55e7", 00:18:01.977 "is_configured": true, 00:18:01.977 "data_offset": 256, 00:18:01.977 "data_size": 7936 00:18:01.977 } 00:18:01.977 ] 00:18:01.977 }' 00:18:01.977 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.977 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.570 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:02.570 10:46:23 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:02.570 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:02.570 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:02.570 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:02.570 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:02.570 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:02.570 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:02.570 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.570 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.570 [2024-11-15 10:46:23.523656] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:02.570 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.570 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:02.570 "name": "Existed_Raid", 00:18:02.570 "aliases": [ 00:18:02.570 "a121c7ed-7eeb-4ef0-a44a-a7ad70864289" 00:18:02.570 ], 00:18:02.570 "product_name": "Raid Volume", 00:18:02.570 "block_size": 4096, 00:18:02.570 "num_blocks": 7936, 00:18:02.570 "uuid": "a121c7ed-7eeb-4ef0-a44a-a7ad70864289", 00:18:02.570 "assigned_rate_limits": { 00:18:02.570 "rw_ios_per_sec": 0, 00:18:02.570 "rw_mbytes_per_sec": 0, 00:18:02.570 "r_mbytes_per_sec": 0, 00:18:02.570 "w_mbytes_per_sec": 0 00:18:02.570 }, 00:18:02.570 "claimed": false, 00:18:02.570 "zoned": false, 00:18:02.570 "supported_io_types": { 00:18:02.570 "read": true, 
00:18:02.570 "write": true, 00:18:02.570 "unmap": false, 00:18:02.570 "flush": false, 00:18:02.570 "reset": true, 00:18:02.570 "nvme_admin": false, 00:18:02.570 "nvme_io": false, 00:18:02.570 "nvme_io_md": false, 00:18:02.570 "write_zeroes": true, 00:18:02.570 "zcopy": false, 00:18:02.570 "get_zone_info": false, 00:18:02.570 "zone_management": false, 00:18:02.570 "zone_append": false, 00:18:02.570 "compare": false, 00:18:02.570 "compare_and_write": false, 00:18:02.570 "abort": false, 00:18:02.570 "seek_hole": false, 00:18:02.570 "seek_data": false, 00:18:02.570 "copy": false, 00:18:02.570 "nvme_iov_md": false 00:18:02.570 }, 00:18:02.570 "memory_domains": [ 00:18:02.570 { 00:18:02.570 "dma_device_id": "system", 00:18:02.570 "dma_device_type": 1 00:18:02.570 }, 00:18:02.570 { 00:18:02.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.570 "dma_device_type": 2 00:18:02.570 }, 00:18:02.570 { 00:18:02.570 "dma_device_id": "system", 00:18:02.570 "dma_device_type": 1 00:18:02.570 }, 00:18:02.570 { 00:18:02.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.570 "dma_device_type": 2 00:18:02.570 } 00:18:02.570 ], 00:18:02.570 "driver_specific": { 00:18:02.570 "raid": { 00:18:02.570 "uuid": "a121c7ed-7eeb-4ef0-a44a-a7ad70864289", 00:18:02.571 "strip_size_kb": 0, 00:18:02.571 "state": "online", 00:18:02.571 "raid_level": "raid1", 00:18:02.571 "superblock": true, 00:18:02.571 "num_base_bdevs": 2, 00:18:02.571 "num_base_bdevs_discovered": 2, 00:18:02.571 "num_base_bdevs_operational": 2, 00:18:02.571 "base_bdevs_list": [ 00:18:02.571 { 00:18:02.571 "name": "BaseBdev1", 00:18:02.571 "uuid": "7648ec71-41b9-4c47-8c86-ed9b85581753", 00:18:02.571 "is_configured": true, 00:18:02.571 "data_offset": 256, 00:18:02.571 "data_size": 7936 00:18:02.571 }, 00:18:02.571 { 00:18:02.571 "name": "BaseBdev2", 00:18:02.571 "uuid": "4cfd47cc-a5c8-4f56-8306-f46403be55e7", 00:18:02.571 "is_configured": true, 00:18:02.571 "data_offset": 256, 00:18:02.571 "data_size": 7936 00:18:02.571 } 
00:18:02.571 ] 00:18:02.571 } 00:18:02.571 } 00:18:02.571 }' 00:18:02.571 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:02.571 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:02.571 BaseBdev2' 00:18:02.571 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:02.571 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:02.571 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:02.571 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:02.571 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.571 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.571 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:02.571 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.571 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:02.571 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:02.571 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:02.571 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:02.571 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.571 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.571 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:02.829 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.829 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:02.829 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:02.829 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:02.829 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.829 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.829 [2024-11-15 10:46:23.779400] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:02.829 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.829 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:02.829 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:02.829 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:02.829 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:18:02.829 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:02.829 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:02.829 10:46:23 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:02.829 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.829 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:02.829 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:02.830 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:02.830 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.830 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.830 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.830 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.830 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.830 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:02.830 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.830 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.830 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.830 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.830 "name": "Existed_Raid", 00:18:02.830 "uuid": "a121c7ed-7eeb-4ef0-a44a-a7ad70864289", 00:18:02.830 "strip_size_kb": 0, 00:18:02.830 "state": "online", 00:18:02.830 "raid_level": "raid1", 00:18:02.830 "superblock": true, 00:18:02.830 
"num_base_bdevs": 2, 00:18:02.830 "num_base_bdevs_discovered": 1, 00:18:02.830 "num_base_bdevs_operational": 1, 00:18:02.830 "base_bdevs_list": [ 00:18:02.830 { 00:18:02.830 "name": null, 00:18:02.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.830 "is_configured": false, 00:18:02.830 "data_offset": 0, 00:18:02.830 "data_size": 7936 00:18:02.830 }, 00:18:02.830 { 00:18:02.830 "name": "BaseBdev2", 00:18:02.830 "uuid": "4cfd47cc-a5c8-4f56-8306-f46403be55e7", 00:18:02.830 "is_configured": true, 00:18:02.830 "data_offset": 256, 00:18:02.830 "data_size": 7936 00:18:02.830 } 00:18:02.830 ] 00:18:02.830 }' 00:18:02.830 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.830 10:46:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.397 10:46:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:03.397 10:46:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:03.397 10:46:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.397 10:46:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:03.397 10:46:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.397 10:46:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.397 10:46:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.397 10:46:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:03.397 10:46:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:03.397 10:46:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:18:03.397 10:46:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.397 10:46:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.397 [2024-11-15 10:46:24.470108] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:03.397 [2024-11-15 10:46:24.470269] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:03.663 [2024-11-15 10:46:24.559399] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:03.663 [2024-11-15 10:46:24.559482] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:03.663 [2024-11-15 10:46:24.559523] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:03.663 10:46:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.663 10:46:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:03.663 10:46:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:03.663 10:46:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:03.663 10:46:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.663 10:46:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.663 10:46:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.663 10:46:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.663 10:46:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:03.663 10:46:24 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:03.663 10:46:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:03.663 10:46:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86293 00:18:03.663 10:46:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86293 ']' 00:18:03.663 10:46:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86293 00:18:03.663 10:46:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:18:03.663 10:46:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:03.663 10:46:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86293 00:18:03.663 10:46:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:03.663 10:46:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:03.663 killing process with pid 86293 00:18:03.663 10:46:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86293' 00:18:03.663 10:46:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86293 00:18:03.663 [2024-11-15 10:46:24.646994] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:03.663 10:46:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86293 00:18:03.664 [2024-11-15 10:46:24.661782] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:04.604 10:46:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:18:04.604 00:18:04.604 real 0m5.570s 00:18:04.604 user 0m8.416s 00:18:04.604 sys 0m0.816s 00:18:04.604 10:46:25 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:04.604 ************************************ 00:18:04.604 END TEST raid_state_function_test_sb_4k 00:18:04.604 ************************************ 00:18:04.604 10:46:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.604 10:46:25 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:18:04.604 10:46:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:04.604 10:46:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:04.604 10:46:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:04.863 ************************************ 00:18:04.863 START TEST raid_superblock_test_4k 00:18:04.863 ************************************ 00:18:04.863 10:46:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:18:04.863 10:46:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:04.863 10:46:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:04.863 10:46:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:04.863 10:46:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:04.863 10:46:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:04.863 10:46:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:04.863 10:46:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:04.863 10:46:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:04.863 10:46:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:04.863 
10:46:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:04.863 10:46:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:04.863 10:46:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:04.863 10:46:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:04.863 10:46:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:04.863 10:46:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:04.863 10:46:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86554 00:18:04.863 10:46:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86554 00:18:04.863 10:46:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86554 ']' 00:18:04.863 10:46:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:04.863 10:46:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.863 10:46:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:04.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:04.863 10:46:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.863 10:46:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:04.863 10:46:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.863 [2024-11-15 10:46:25.878518] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:18:04.863 [2024-11-15 10:46:25.878756] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86554 ] 00:18:05.120 [2024-11-15 10:46:26.070038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.120 [2024-11-15 10:46:26.225439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.378 [2024-11-15 10:46:26.477520] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:05.378 [2024-11-15 10:46:26.477559] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:05.943 10:46:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:05.943 10:46:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:18:05.943 10:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:05.943 10:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:05.943 10:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:05.943 10:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:05.943 10:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:05.943 10:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:05.943 10:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:05.943 10:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:05.943 10:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:18:05.943 10:46:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.943 10:46:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.943 malloc1 00:18:05.943 10:46:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.943 10:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:05.943 10:46:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.943 10:46:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.943 [2024-11-15 10:46:26.866007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:05.943 [2024-11-15 10:46:26.866086] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.943 [2024-11-15 10:46:26.866118] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:05.943 [2024-11-15 10:46:26.866133] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.943 [2024-11-15 10:46:26.868890] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.943 [2024-11-15 10:46:26.868935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:05.943 pt1 00:18:05.943 10:46:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.943 10:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:05.943 10:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:05.943 10:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:05.943 10:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:18:05.943 10:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:05.943 10:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:05.943 10:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:05.943 10:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:05.943 10:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:18:05.943 10:46:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.943 10:46:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.943 malloc2 00:18:05.943 10:46:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.943 10:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:05.943 10:46:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.944 10:46:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.944 [2024-11-15 10:46:26.921940] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:05.944 [2024-11-15 10:46:26.922009] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.944 [2024-11-15 10:46:26.922040] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:05.944 [2024-11-15 10:46:26.922055] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.944 [2024-11-15 10:46:26.924835] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.944 [2024-11-15 
10:46:26.924885] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:05.944 pt2 00:18:05.944 10:46:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.944 10:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:05.944 10:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:05.944 10:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:05.944 10:46:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.944 10:46:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.944 [2024-11-15 10:46:26.930019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:05.944 [2024-11-15 10:46:26.932372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:05.944 [2024-11-15 10:46:26.932615] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:05.944 [2024-11-15 10:46:26.932659] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:05.944 [2024-11-15 10:46:26.932971] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:05.944 [2024-11-15 10:46:26.933182] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:05.944 [2024-11-15 10:46:26.933217] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:05.944 [2024-11-15 10:46:26.933396] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:05.944 10:46:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.944 10:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:05.944 10:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:05.944 10:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:05.944 10:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:05.944 10:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:05.944 10:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:05.944 10:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.944 10:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.944 10:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.944 10:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.944 10:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.944 10:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.944 10:46:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.944 10:46:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.944 10:46:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.944 10:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.944 "name": "raid_bdev1", 00:18:05.944 "uuid": "4f5b1e3f-e549-49c9-8e23-323591579298", 00:18:05.944 "strip_size_kb": 0, 00:18:05.944 "state": "online", 00:18:05.944 "raid_level": "raid1", 00:18:05.944 "superblock": true, 00:18:05.944 "num_base_bdevs": 2, 00:18:05.944 
"num_base_bdevs_discovered": 2, 00:18:05.944 "num_base_bdevs_operational": 2, 00:18:05.944 "base_bdevs_list": [ 00:18:05.944 { 00:18:05.944 "name": "pt1", 00:18:05.944 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:05.944 "is_configured": true, 00:18:05.944 "data_offset": 256, 00:18:05.944 "data_size": 7936 00:18:05.944 }, 00:18:05.944 { 00:18:05.944 "name": "pt2", 00:18:05.944 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:05.944 "is_configured": true, 00:18:05.944 "data_offset": 256, 00:18:05.944 "data_size": 7936 00:18:05.944 } 00:18:05.944 ] 00:18:05.944 }' 00:18:05.944 10:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.944 10:46:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.510 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:06.510 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:06.510 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:06.510 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:06.510 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:06.510 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:06.510 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:06.510 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.510 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.510 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:06.510 [2024-11-15 10:46:27.422470] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:18:06.510 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.510 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:06.510 "name": "raid_bdev1", 00:18:06.510 "aliases": [ 00:18:06.510 "4f5b1e3f-e549-49c9-8e23-323591579298" 00:18:06.510 ], 00:18:06.510 "product_name": "Raid Volume", 00:18:06.510 "block_size": 4096, 00:18:06.510 "num_blocks": 7936, 00:18:06.510 "uuid": "4f5b1e3f-e549-49c9-8e23-323591579298", 00:18:06.510 "assigned_rate_limits": { 00:18:06.510 "rw_ios_per_sec": 0, 00:18:06.510 "rw_mbytes_per_sec": 0, 00:18:06.510 "r_mbytes_per_sec": 0, 00:18:06.510 "w_mbytes_per_sec": 0 00:18:06.510 }, 00:18:06.510 "claimed": false, 00:18:06.510 "zoned": false, 00:18:06.510 "supported_io_types": { 00:18:06.510 "read": true, 00:18:06.510 "write": true, 00:18:06.510 "unmap": false, 00:18:06.510 "flush": false, 00:18:06.510 "reset": true, 00:18:06.510 "nvme_admin": false, 00:18:06.510 "nvme_io": false, 00:18:06.510 "nvme_io_md": false, 00:18:06.510 "write_zeroes": true, 00:18:06.510 "zcopy": false, 00:18:06.510 "get_zone_info": false, 00:18:06.510 "zone_management": false, 00:18:06.510 "zone_append": false, 00:18:06.510 "compare": false, 00:18:06.510 "compare_and_write": false, 00:18:06.510 "abort": false, 00:18:06.510 "seek_hole": false, 00:18:06.510 "seek_data": false, 00:18:06.510 "copy": false, 00:18:06.510 "nvme_iov_md": false 00:18:06.510 }, 00:18:06.510 "memory_domains": [ 00:18:06.510 { 00:18:06.510 "dma_device_id": "system", 00:18:06.510 "dma_device_type": 1 00:18:06.510 }, 00:18:06.510 { 00:18:06.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.510 "dma_device_type": 2 00:18:06.510 }, 00:18:06.510 { 00:18:06.510 "dma_device_id": "system", 00:18:06.510 "dma_device_type": 1 00:18:06.510 }, 00:18:06.510 { 00:18:06.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.510 "dma_device_type": 2 00:18:06.510 } 00:18:06.510 ], 
00:18:06.510 "driver_specific": { 00:18:06.510 "raid": { 00:18:06.510 "uuid": "4f5b1e3f-e549-49c9-8e23-323591579298", 00:18:06.510 "strip_size_kb": 0, 00:18:06.510 "state": "online", 00:18:06.510 "raid_level": "raid1", 00:18:06.510 "superblock": true, 00:18:06.510 "num_base_bdevs": 2, 00:18:06.510 "num_base_bdevs_discovered": 2, 00:18:06.510 "num_base_bdevs_operational": 2, 00:18:06.510 "base_bdevs_list": [ 00:18:06.510 { 00:18:06.510 "name": "pt1", 00:18:06.510 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:06.510 "is_configured": true, 00:18:06.510 "data_offset": 256, 00:18:06.510 "data_size": 7936 00:18:06.510 }, 00:18:06.510 { 00:18:06.510 "name": "pt2", 00:18:06.510 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:06.510 "is_configured": true, 00:18:06.510 "data_offset": 256, 00:18:06.510 "data_size": 7936 00:18:06.510 } 00:18:06.510 ] 00:18:06.510 } 00:18:06.510 } 00:18:06.510 }' 00:18:06.510 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:06.510 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:06.510 pt2' 00:18:06.510 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:06.510 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:06.510 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:06.510 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:06.510 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:06.511 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.511 10:46:27 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.511 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.511 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:06.511 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:06.511 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:06.511 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:06.511 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.511 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.511 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:06.511 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.511 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:06.511 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:06.511 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:06.511 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.511 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.511 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:06.511 [2024-11-15 10:46:27.666508] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:06.770 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:18:06.770 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4f5b1e3f-e549-49c9-8e23-323591579298 00:18:06.770 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 4f5b1e3f-e549-49c9-8e23-323591579298 ']' 00:18:06.770 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:06.770 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.770 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.770 [2024-11-15 10:46:27.714140] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:06.770 [2024-11-15 10:46:27.714168] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:06.770 [2024-11-15 10:46:27.714256] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:06.770 [2024-11-15 10:46:27.714329] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:06.770 [2024-11-15 10:46:27.714348] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:06.770 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.770 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:06.770 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.770 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.770 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.770 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.770 10:46:27 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:06.770 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:06.770 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:06.770 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:06.770 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.770 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.770 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.770 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:06.770 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:06.770 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.770 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.770 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.770 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:06.770 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.770 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.770 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:06.770 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.770 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:06.770 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:06.770 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:18:06.770 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:06.770 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:06.770 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:06.770 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:06.770 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:06.770 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:06.770 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.770 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.770 [2024-11-15 10:46:27.838264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:06.770 [2024-11-15 10:46:27.840813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:06.770 [2024-11-15 10:46:27.840915] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:06.770 [2024-11-15 10:46:27.840994] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:06.770 [2024-11-15 10:46:27.841020] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:06.771 [2024-11-15 10:46:27.841035] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:06.771 request: 00:18:06.771 { 00:18:06.771 "name": "raid_bdev1", 00:18:06.771 "raid_level": "raid1", 00:18:06.771 "base_bdevs": [ 00:18:06.771 "malloc1", 00:18:06.771 "malloc2" 00:18:06.771 ], 00:18:06.771 "superblock": false, 00:18:06.771 "method": "bdev_raid_create", 00:18:06.771 "req_id": 1 00:18:06.771 } 00:18:06.771 Got JSON-RPC error response 00:18:06.771 response: 00:18:06.771 { 00:18:06.771 "code": -17, 00:18:06.771 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:06.771 } 00:18:06.771 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:06.771 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:18:06.771 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:06.771 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:06.771 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:06.771 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.771 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:06.771 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.771 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.771 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.771 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:06.771 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:06.771 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:18:06.771 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.771 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.771 [2024-11-15 10:46:27.906260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:06.771 [2024-11-15 10:46:27.906340] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.771 [2024-11-15 10:46:27.906371] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:06.771 [2024-11-15 10:46:27.906388] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.771 [2024-11-15 10:46:27.909308] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.771 [2024-11-15 10:46:27.909370] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:06.771 [2024-11-15 10:46:27.909495] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:06.771 [2024-11-15 10:46:27.909602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:06.771 pt1 00:18:06.771 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.771 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:06.771 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.771 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:06.771 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:06.771 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:06.771 10:46:27 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:06.771 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.771 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.771 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.771 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.771 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.771 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.771 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.771 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.771 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.044 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.044 "name": "raid_bdev1", 00:18:07.044 "uuid": "4f5b1e3f-e549-49c9-8e23-323591579298", 00:18:07.044 "strip_size_kb": 0, 00:18:07.044 "state": "configuring", 00:18:07.044 "raid_level": "raid1", 00:18:07.044 "superblock": true, 00:18:07.044 "num_base_bdevs": 2, 00:18:07.044 "num_base_bdevs_discovered": 1, 00:18:07.044 "num_base_bdevs_operational": 2, 00:18:07.044 "base_bdevs_list": [ 00:18:07.044 { 00:18:07.044 "name": "pt1", 00:18:07.044 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:07.044 "is_configured": true, 00:18:07.044 "data_offset": 256, 00:18:07.044 "data_size": 7936 00:18:07.044 }, 00:18:07.044 { 00:18:07.044 "name": null, 00:18:07.044 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:07.044 "is_configured": false, 00:18:07.044 "data_offset": 256, 00:18:07.044 "data_size": 7936 00:18:07.044 } 
00:18:07.044 ] 00:18:07.044 }' 00:18:07.044 10:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.044 10:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.329 10:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:07.329 10:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:07.329 10:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:07.329 10:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:07.329 10:46:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.329 10:46:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.329 [2024-11-15 10:46:28.406433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:07.329 [2024-11-15 10:46:28.406563] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.329 [2024-11-15 10:46:28.406598] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:07.329 [2024-11-15 10:46:28.406616] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.329 [2024-11-15 10:46:28.407200] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.329 [2024-11-15 10:46:28.407241] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:07.329 [2024-11-15 10:46:28.407341] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:07.329 [2024-11-15 10:46:28.407377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:07.329 [2024-11-15 10:46:28.407540] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:18:07.329 [2024-11-15 10:46:28.407561] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:07.329 [2024-11-15 10:46:28.407852] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:07.329 [2024-11-15 10:46:28.408059] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:07.329 [2024-11-15 10:46:28.408076] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:07.329 [2024-11-15 10:46:28.408245] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:07.329 pt2 00:18:07.329 10:46:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.329 10:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:07.329 10:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:07.329 10:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:07.329 10:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:07.329 10:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:07.329 10:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:07.329 10:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:07.329 10:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:07.329 10:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.329 10:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.329 10:46:28 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.329 10:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.329 10:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.329 10:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.329 10:46:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.329 10:46:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.329 10:46:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.329 10:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.329 "name": "raid_bdev1", 00:18:07.329 "uuid": "4f5b1e3f-e549-49c9-8e23-323591579298", 00:18:07.329 "strip_size_kb": 0, 00:18:07.329 "state": "online", 00:18:07.329 "raid_level": "raid1", 00:18:07.329 "superblock": true, 00:18:07.329 "num_base_bdevs": 2, 00:18:07.329 "num_base_bdevs_discovered": 2, 00:18:07.329 "num_base_bdevs_operational": 2, 00:18:07.329 "base_bdevs_list": [ 00:18:07.329 { 00:18:07.329 "name": "pt1", 00:18:07.329 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:07.329 "is_configured": true, 00:18:07.329 "data_offset": 256, 00:18:07.329 "data_size": 7936 00:18:07.329 }, 00:18:07.329 { 00:18:07.329 "name": "pt2", 00:18:07.329 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:07.329 "is_configured": true, 00:18:07.329 "data_offset": 256, 00:18:07.330 "data_size": 7936 00:18:07.330 } 00:18:07.330 ] 00:18:07.330 }' 00:18:07.330 10:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.330 10:46:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.896 10:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:18:07.896 10:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:07.896 10:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:07.896 10:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:07.896 10:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:07.896 10:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:07.896 10:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:07.896 10:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:07.896 10:46:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.896 10:46:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.896 [2024-11-15 10:46:28.934912] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:07.896 10:46:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.896 10:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:07.896 "name": "raid_bdev1", 00:18:07.896 "aliases": [ 00:18:07.896 "4f5b1e3f-e549-49c9-8e23-323591579298" 00:18:07.896 ], 00:18:07.896 "product_name": "Raid Volume", 00:18:07.896 "block_size": 4096, 00:18:07.896 "num_blocks": 7936, 00:18:07.896 "uuid": "4f5b1e3f-e549-49c9-8e23-323591579298", 00:18:07.896 "assigned_rate_limits": { 00:18:07.896 "rw_ios_per_sec": 0, 00:18:07.896 "rw_mbytes_per_sec": 0, 00:18:07.896 "r_mbytes_per_sec": 0, 00:18:07.896 "w_mbytes_per_sec": 0 00:18:07.896 }, 00:18:07.896 "claimed": false, 00:18:07.896 "zoned": false, 00:18:07.896 "supported_io_types": { 00:18:07.896 "read": true, 00:18:07.896 "write": true, 00:18:07.896 "unmap": false, 
00:18:07.896 "flush": false, 00:18:07.896 "reset": true, 00:18:07.896 "nvme_admin": false, 00:18:07.896 "nvme_io": false, 00:18:07.896 "nvme_io_md": false, 00:18:07.896 "write_zeroes": true, 00:18:07.896 "zcopy": false, 00:18:07.896 "get_zone_info": false, 00:18:07.896 "zone_management": false, 00:18:07.896 "zone_append": false, 00:18:07.896 "compare": false, 00:18:07.896 "compare_and_write": false, 00:18:07.896 "abort": false, 00:18:07.896 "seek_hole": false, 00:18:07.896 "seek_data": false, 00:18:07.896 "copy": false, 00:18:07.897 "nvme_iov_md": false 00:18:07.897 }, 00:18:07.897 "memory_domains": [ 00:18:07.897 { 00:18:07.897 "dma_device_id": "system", 00:18:07.897 "dma_device_type": 1 00:18:07.897 }, 00:18:07.897 { 00:18:07.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.897 "dma_device_type": 2 00:18:07.897 }, 00:18:07.897 { 00:18:07.897 "dma_device_id": "system", 00:18:07.897 "dma_device_type": 1 00:18:07.897 }, 00:18:07.897 { 00:18:07.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.897 "dma_device_type": 2 00:18:07.897 } 00:18:07.897 ], 00:18:07.897 "driver_specific": { 00:18:07.897 "raid": { 00:18:07.897 "uuid": "4f5b1e3f-e549-49c9-8e23-323591579298", 00:18:07.897 "strip_size_kb": 0, 00:18:07.897 "state": "online", 00:18:07.897 "raid_level": "raid1", 00:18:07.897 "superblock": true, 00:18:07.897 "num_base_bdevs": 2, 00:18:07.897 "num_base_bdevs_discovered": 2, 00:18:07.897 "num_base_bdevs_operational": 2, 00:18:07.897 "base_bdevs_list": [ 00:18:07.897 { 00:18:07.897 "name": "pt1", 00:18:07.897 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:07.897 "is_configured": true, 00:18:07.897 "data_offset": 256, 00:18:07.897 "data_size": 7936 00:18:07.897 }, 00:18:07.897 { 00:18:07.897 "name": "pt2", 00:18:07.897 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:07.897 "is_configured": true, 00:18:07.897 "data_offset": 256, 00:18:07.897 "data_size": 7936 00:18:07.897 } 00:18:07.897 ] 00:18:07.897 } 00:18:07.897 } 00:18:07.897 }' 00:18:07.897 
10:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:07.897 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:07.897 pt2' 00:18:07.897 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:08.155 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:08.155 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:08.155 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:08.155 10:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.155 10:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.155 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:08.155 10:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.155 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:08.155 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:08.155 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:08.155 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:08.155 10:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.155 10:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.155 10:46:29 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:08.155 10:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.156 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:08.156 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:08.156 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:08.156 10:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.156 10:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.156 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:08.156 [2024-11-15 10:46:29.186967] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:08.156 10:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.156 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 4f5b1e3f-e549-49c9-8e23-323591579298 '!=' 4f5b1e3f-e549-49c9-8e23-323591579298 ']' 00:18:08.156 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:08.156 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:08.156 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:18:08.156 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:08.156 10:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.156 10:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.156 [2024-11-15 10:46:29.238753] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
00:18:08.156 10:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.156 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:08.156 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:08.156 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:08.156 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:08.156 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:08.156 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:08.156 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.156 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.156 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.156 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.156 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.156 10:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.156 10:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.156 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.156 10:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.156 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.156 "name": "raid_bdev1", 00:18:08.156 "uuid": 
"4f5b1e3f-e549-49c9-8e23-323591579298", 00:18:08.156 "strip_size_kb": 0, 00:18:08.156 "state": "online", 00:18:08.156 "raid_level": "raid1", 00:18:08.156 "superblock": true, 00:18:08.156 "num_base_bdevs": 2, 00:18:08.156 "num_base_bdevs_discovered": 1, 00:18:08.156 "num_base_bdevs_operational": 1, 00:18:08.156 "base_bdevs_list": [ 00:18:08.156 { 00:18:08.156 "name": null, 00:18:08.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.156 "is_configured": false, 00:18:08.156 "data_offset": 0, 00:18:08.156 "data_size": 7936 00:18:08.156 }, 00:18:08.156 { 00:18:08.156 "name": "pt2", 00:18:08.156 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:08.156 "is_configured": true, 00:18:08.156 "data_offset": 256, 00:18:08.156 "data_size": 7936 00:18:08.156 } 00:18:08.156 ] 00:18:08.156 }' 00:18:08.156 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.156 10:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.722 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:08.722 10:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.722 10:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.722 [2024-11-15 10:46:29.762839] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:08.722 [2024-11-15 10:46:29.762877] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:08.722 [2024-11-15 10:46:29.762972] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:08.722 [2024-11-15 10:46:29.763040] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:08.722 [2024-11-15 10:46:29.763058] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state 
offline 00:18:08.722 10:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.722 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.722 10:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.722 10:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.722 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:08.722 10:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.722 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:08.722 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:08.722 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:08.722 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:08.722 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:08.722 10:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.722 10:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.722 10:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.722 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:08.722 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:08.722 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:08.722 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:08.722 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 
00:18:08.722 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:08.722 10:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.722 10:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.722 [2024-11-15 10:46:29.834865] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:08.722 [2024-11-15 10:46:29.834969] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:08.722 [2024-11-15 10:46:29.835004] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:08.722 [2024-11-15 10:46:29.835024] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.722 [2024-11-15 10:46:29.837937] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.722 [2024-11-15 10:46:29.837989] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:08.722 [2024-11-15 10:46:29.838090] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:08.722 [2024-11-15 10:46:29.838154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:08.722 [2024-11-15 10:46:29.838280] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:08.722 [2024-11-15 10:46:29.838320] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:08.722 [2024-11-15 10:46:29.838627] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:08.722 [2024-11-15 10:46:29.838907] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:08.722 [2024-11-15 10:46:29.838936] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:18:08.722 [2024-11-15 10:46:29.839163] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:08.722 pt2 00:18:08.722 10:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.722 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:08.722 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:08.722 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:08.722 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:08.722 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:08.722 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:08.722 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.722 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.722 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.722 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.722 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.722 10:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.722 10:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.722 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.722 10:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.980 10:46:29 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.980 "name": "raid_bdev1", 00:18:08.980 "uuid": "4f5b1e3f-e549-49c9-8e23-323591579298", 00:18:08.980 "strip_size_kb": 0, 00:18:08.980 "state": "online", 00:18:08.980 "raid_level": "raid1", 00:18:08.980 "superblock": true, 00:18:08.980 "num_base_bdevs": 2, 00:18:08.980 "num_base_bdevs_discovered": 1, 00:18:08.980 "num_base_bdevs_operational": 1, 00:18:08.980 "base_bdevs_list": [ 00:18:08.980 { 00:18:08.980 "name": null, 00:18:08.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.980 "is_configured": false, 00:18:08.980 "data_offset": 256, 00:18:08.980 "data_size": 7936 00:18:08.980 }, 00:18:08.980 { 00:18:08.980 "name": "pt2", 00:18:08.980 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:08.980 "is_configured": true, 00:18:08.980 "data_offset": 256, 00:18:08.980 "data_size": 7936 00:18:08.980 } 00:18:08.980 ] 00:18:08.980 }' 00:18:08.980 10:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.980 10:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.238 10:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:09.238 10:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.238 10:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.238 [2024-11-15 10:46:30.359219] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:09.238 [2024-11-15 10:46:30.359260] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:09.238 [2024-11-15 10:46:30.359358] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:09.238 [2024-11-15 10:46:30.359425] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:18:09.238 [2024-11-15 10:46:30.359439] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:09.238 10:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.238 10:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.238 10:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.238 10:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.238 10:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:09.238 10:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.495 10:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:09.495 10:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:09.495 10:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:09.495 10:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:09.495 10:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.495 10:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.495 [2024-11-15 10:46:30.419261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:09.495 [2024-11-15 10:46:30.419335] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.495 [2024-11-15 10:46:30.419367] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:09.495 [2024-11-15 10:46:30.419382] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.495 [2024-11-15 10:46:30.422292] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.495 [2024-11-15 10:46:30.422338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:09.495 [2024-11-15 10:46:30.422444] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:09.495 [2024-11-15 10:46:30.422531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:09.495 [2024-11-15 10:46:30.422716] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:09.495 [2024-11-15 10:46:30.422734] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:09.495 [2024-11-15 10:46:30.422756] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:09.495 [2024-11-15 10:46:30.422831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:09.495 [2024-11-15 10:46:30.422936] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:09.495 [2024-11-15 10:46:30.422951] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:09.495 [2024-11-15 10:46:30.423257] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:09.495 [2024-11-15 10:46:30.423452] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:09.495 [2024-11-15 10:46:30.423472] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:09.495 [2024-11-15 10:46:30.423718] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:09.495 pt1 00:18:09.495 10:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.495 10:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- 
# '[' 2 -gt 2 ']' 00:18:09.495 10:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:09.495 10:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:09.495 10:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:09.495 10:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:09.495 10:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:09.495 10:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:09.495 10:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.495 10:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.495 10:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.495 10:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.495 10:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.495 10:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.495 10:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.495 10:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.495 10:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.495 10:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.495 "name": "raid_bdev1", 00:18:09.495 "uuid": "4f5b1e3f-e549-49c9-8e23-323591579298", 00:18:09.495 "strip_size_kb": 0, 00:18:09.495 "state": "online", 00:18:09.495 
"raid_level": "raid1", 00:18:09.495 "superblock": true, 00:18:09.495 "num_base_bdevs": 2, 00:18:09.495 "num_base_bdevs_discovered": 1, 00:18:09.495 "num_base_bdevs_operational": 1, 00:18:09.495 "base_bdevs_list": [ 00:18:09.495 { 00:18:09.495 "name": null, 00:18:09.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.495 "is_configured": false, 00:18:09.495 "data_offset": 256, 00:18:09.495 "data_size": 7936 00:18:09.495 }, 00:18:09.495 { 00:18:09.495 "name": "pt2", 00:18:09.495 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:09.495 "is_configured": true, 00:18:09.495 "data_offset": 256, 00:18:09.495 "data_size": 7936 00:18:09.495 } 00:18:09.495 ] 00:18:09.495 }' 00:18:09.495 10:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.495 10:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:10.058 10:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:10.058 10:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:10.058 10:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.058 10:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:10.058 10:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.058 10:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:10.058 10:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:10.058 10:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:10.058 10:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.058 10:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # 
set +x 00:18:10.058 [2024-11-15 10:46:30.992045] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:10.058 10:46:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.058 10:46:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 4f5b1e3f-e549-49c9-8e23-323591579298 '!=' 4f5b1e3f-e549-49c9-8e23-323591579298 ']' 00:18:10.058 10:46:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86554 00:18:10.058 10:46:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86554 ']' 00:18:10.058 10:46:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86554 00:18:10.058 10:46:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:18:10.058 10:46:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:10.058 10:46:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86554 00:18:10.058 10:46:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:10.058 10:46:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:10.058 10:46:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86554' 00:18:10.058 killing process with pid 86554 00:18:10.058 10:46:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86554 00:18:10.058 [2024-11-15 10:46:31.065769] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:10.058 10:46:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86554 00:18:10.058 [2024-11-15 10:46:31.065882] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:10.058 [2024-11-15 10:46:31.065952] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:10.058 [2024-11-15 10:46:31.065974] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:10.316 [2024-11-15 10:46:31.248464] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:11.249 10:46:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:18:11.249 00:18:11.249 real 0m6.510s 00:18:11.249 user 0m10.279s 00:18:11.249 sys 0m0.956s 00:18:11.249 10:46:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:11.249 10:46:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:11.249 ************************************ 00:18:11.249 END TEST raid_superblock_test_4k 00:18:11.249 ************************************ 00:18:11.249 10:46:32 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:18:11.249 10:46:32 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:18:11.249 10:46:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:11.249 10:46:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:11.249 10:46:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:11.249 ************************************ 00:18:11.249 START TEST raid_rebuild_test_sb_4k 00:18:11.249 ************************************ 00:18:11.249 10:46:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:18:11.249 10:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:11.249 10:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:11.249 10:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:11.249 
10:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:11.249 10:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:11.249 10:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:11.249 10:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:11.249 10:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:11.249 10:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:11.249 10:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:11.249 10:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:11.249 10:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:11.249 10:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:11.249 10:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:11.249 10:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:11.249 10:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:11.249 10:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:11.249 10:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:11.249 10:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:11.249 10:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:11.249 10:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:11.249 10:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # 
strip_size=0 00:18:11.249 10:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:11.249 10:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:11.249 10:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86877 00:18:11.249 10:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86877 00:18:11.249 10:46:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86877 ']' 00:18:11.249 10:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:11.249 10:46:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.249 10:46:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:11.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.249 10:46:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.249 10:46:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:11.249 10:46:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:11.508 [2024-11-15 10:46:32.448462] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:18:11.508 [2024-11-15 10:46:32.448669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86877 ] 00:18:11.508 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:18:11.508 Zero copy mechanism will not be used. 00:18:11.508 [2024-11-15 10:46:32.638887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.766 [2024-11-15 10:46:32.835038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.023 [2024-11-15 10:46:33.056042] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:12.023 [2024-11-15 10:46:33.056121] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:12.280 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:12.280 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:18:12.280 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:12.280 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:18:12.280 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.280 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.538 BaseBdev1_malloc 00:18:12.538 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.538 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:12.538 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.538 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.538 [2024-11-15 10:46:33.461999] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:12.538 [2024-11-15 10:46:33.462083] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.538 [2024-11-15 10:46:33.462113] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000007280 00:18:12.538 [2024-11-15 10:46:33.462131] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.538 [2024-11-15 10:46:33.464937] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.538 [2024-11-15 10:46:33.464990] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:12.538 BaseBdev1 00:18:12.538 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.538 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:12.538 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:18:12.538 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.538 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.538 BaseBdev2_malloc 00:18:12.538 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.538 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:12.538 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.538 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.538 [2024-11-15 10:46:33.514291] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:12.538 [2024-11-15 10:46:33.514365] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.538 [2024-11-15 10:46:33.514393] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:12.538 [2024-11-15 10:46:33.514413] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:18:12.539 [2024-11-15 10:46:33.517120] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.539 [2024-11-15 10:46:33.517171] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:12.539 BaseBdev2 00:18:12.539 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.539 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:18:12.539 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.539 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.539 spare_malloc 00:18:12.539 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.539 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:12.539 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.539 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.539 spare_delay 00:18:12.539 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.539 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:12.539 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.539 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.539 [2024-11-15 10:46:33.590930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:12.539 [2024-11-15 10:46:33.591004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.539 [2024-11-15 10:46:33.591036] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:12.539 [2024-11-15 10:46:33.591054] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.539 [2024-11-15 10:46:33.593899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.539 [2024-11-15 10:46:33.593951] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:12.539 spare 00:18:12.539 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.539 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:12.539 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.539 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.539 [2024-11-15 10:46:33.599008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:12.539 [2024-11-15 10:46:33.601436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:12.539 [2024-11-15 10:46:33.601690] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:12.539 [2024-11-15 10:46:33.601718] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:12.539 [2024-11-15 10:46:33.602028] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:12.539 [2024-11-15 10:46:33.602260] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:12.539 [2024-11-15 10:46:33.602291] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:12.539 [2024-11-15 10:46:33.602475] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:12.539 
10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.539 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:12.539 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:12.539 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:12.539 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:12.539 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:12.539 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:12.539 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.539 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.539 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.539 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.539 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.539 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.539 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.539 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.539 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.539 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.539 "name": "raid_bdev1", 00:18:12.539 "uuid": "ea522cda-2a0f-4f41-a77d-91c47edad7a4", 
00:18:12.539 "strip_size_kb": 0, 00:18:12.539 "state": "online", 00:18:12.539 "raid_level": "raid1", 00:18:12.539 "superblock": true, 00:18:12.539 "num_base_bdevs": 2, 00:18:12.539 "num_base_bdevs_discovered": 2, 00:18:12.539 "num_base_bdevs_operational": 2, 00:18:12.539 "base_bdevs_list": [ 00:18:12.539 { 00:18:12.539 "name": "BaseBdev1", 00:18:12.539 "uuid": "6f5ce5cb-0da3-58b8-8866-09dfe5418179", 00:18:12.539 "is_configured": true, 00:18:12.539 "data_offset": 256, 00:18:12.539 "data_size": 7936 00:18:12.539 }, 00:18:12.539 { 00:18:12.539 "name": "BaseBdev2", 00:18:12.539 "uuid": "2a769dcd-4fa5-51ff-af41-8ba820485190", 00:18:12.539 "is_configured": true, 00:18:12.539 "data_offset": 256, 00:18:12.539 "data_size": 7936 00:18:12.539 } 00:18:12.539 ] 00:18:12.539 }' 00:18:12.539 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.539 10:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.106 10:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:13.106 10:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:13.106 10:46:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.106 10:46:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.106 [2024-11-15 10:46:34.111481] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:13.106 10:46:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.106 10:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:13.106 10:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:13.106 10:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 
00:18:13.106 10:46:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.106 10:46:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.106 10:46:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.106 10:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:13.106 10:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:13.106 10:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:13.106 10:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:13.106 10:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:13.106 10:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:13.106 10:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:13.106 10:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:13.106 10:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:13.106 10:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:13.106 10:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:18:13.106 10:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:13.106 10:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:13.106 10:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:13.364 [2024-11-15 10:46:34.455290] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005fb0 00:18:13.364 /dev/nbd0 00:18:13.364 10:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:13.364 10:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:13.364 10:46:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:13.364 10:46:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:18:13.364 10:46:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:13.364 10:46:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:13.364 10:46:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:13.364 10:46:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:18:13.364 10:46:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:13.364 10:46:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:13.364 10:46:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:13.364 1+0 records in 00:18:13.364 1+0 records out 00:18:13.364 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401665 s, 10.2 MB/s 00:18:13.364 10:46:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:13.364 10:46:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:18:13.364 10:46:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:13.364 10:46:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:13.364 10:46:34 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:18:13.364 10:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:13.364 10:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:13.364 10:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:13.364 10:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:13.364 10:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:18:14.297 7936+0 records in 00:18:14.297 7936+0 records out 00:18:14.297 32505856 bytes (33 MB, 31 MiB) copied, 0.899195 s, 36.1 MB/s 00:18:14.297 10:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:14.297 10:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:14.297 10:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:14.297 10:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:14.297 10:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:18:14.297 10:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:14.297 10:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:14.863 10:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:14.863 [2024-11-15 10:46:35.720410] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:14.863 10:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:14.863 10:46:35 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:14.863 10:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:14.863 10:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:14.863 10:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:14.863 10:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:14.863 10:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:14.863 10:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:14.863 10:46:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.863 10:46:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.864 [2024-11-15 10:46:35.732541] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:14.864 10:46:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.864 10:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:14.864 10:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:14.864 10:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:14.864 10:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:14.864 10:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:14.864 10:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:14.864 10:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.864 10:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 
-- # local num_base_bdevs 00:18:14.864 10:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.864 10:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.864 10:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.864 10:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.864 10:46:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.864 10:46:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.864 10:46:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.864 10:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.864 "name": "raid_bdev1", 00:18:14.864 "uuid": "ea522cda-2a0f-4f41-a77d-91c47edad7a4", 00:18:14.864 "strip_size_kb": 0, 00:18:14.864 "state": "online", 00:18:14.864 "raid_level": "raid1", 00:18:14.864 "superblock": true, 00:18:14.864 "num_base_bdevs": 2, 00:18:14.864 "num_base_bdevs_discovered": 1, 00:18:14.864 "num_base_bdevs_operational": 1, 00:18:14.864 "base_bdevs_list": [ 00:18:14.864 { 00:18:14.864 "name": null, 00:18:14.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.864 "is_configured": false, 00:18:14.864 "data_offset": 0, 00:18:14.864 "data_size": 7936 00:18:14.864 }, 00:18:14.864 { 00:18:14.864 "name": "BaseBdev2", 00:18:14.864 "uuid": "2a769dcd-4fa5-51ff-af41-8ba820485190", 00:18:14.864 "is_configured": true, 00:18:14.864 "data_offset": 256, 00:18:14.864 "data_size": 7936 00:18:14.864 } 00:18:14.864 ] 00:18:14.864 }' 00:18:14.864 10:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.864 10:46:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:15.122 10:46:36 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:15.122 10:46:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.122 10:46:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:15.122 [2024-11-15 10:46:36.196709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:15.122 [2024-11-15 10:46:36.213420] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:18:15.122 10:46:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.122 10:46:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:15.122 [2024-11-15 10:46:36.215857] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:16.501 10:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:16.501 10:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:16.501 10:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:16.501 10:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:16.501 10:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:16.501 10:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.501 10:46:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.501 10:46:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.501 10:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.501 10:46:37 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.501 10:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:16.501 "name": "raid_bdev1", 00:18:16.501 "uuid": "ea522cda-2a0f-4f41-a77d-91c47edad7a4", 00:18:16.501 "strip_size_kb": 0, 00:18:16.501 "state": "online", 00:18:16.501 "raid_level": "raid1", 00:18:16.501 "superblock": true, 00:18:16.501 "num_base_bdevs": 2, 00:18:16.501 "num_base_bdevs_discovered": 2, 00:18:16.501 "num_base_bdevs_operational": 2, 00:18:16.501 "process": { 00:18:16.501 "type": "rebuild", 00:18:16.501 "target": "spare", 00:18:16.501 "progress": { 00:18:16.501 "blocks": 2560, 00:18:16.501 "percent": 32 00:18:16.501 } 00:18:16.501 }, 00:18:16.501 "base_bdevs_list": [ 00:18:16.501 { 00:18:16.501 "name": "spare", 00:18:16.501 "uuid": "29a83ff3-c45f-5774-9654-792d99966308", 00:18:16.501 "is_configured": true, 00:18:16.501 "data_offset": 256, 00:18:16.501 "data_size": 7936 00:18:16.501 }, 00:18:16.501 { 00:18:16.501 "name": "BaseBdev2", 00:18:16.501 "uuid": "2a769dcd-4fa5-51ff-af41-8ba820485190", 00:18:16.501 "is_configured": true, 00:18:16.501 "data_offset": 256, 00:18:16.501 "data_size": 7936 00:18:16.501 } 00:18:16.501 ] 00:18:16.501 }' 00:18:16.501 10:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:16.501 10:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:16.501 10:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:16.501 10:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:16.501 10:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:16.501 10:46:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.501 10:46:37 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.501 [2024-11-15 10:46:37.377134] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:16.501 [2024-11-15 10:46:37.425173] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:16.501 [2024-11-15 10:46:37.425310] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:16.501 [2024-11-15 10:46:37.425337] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:16.501 [2024-11-15 10:46:37.425354] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:16.501 10:46:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.501 10:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:16.501 10:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:16.501 10:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.501 10:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.501 10:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.501 10:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:16.501 10:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.501 10:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.501 10:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.501 10:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.501 10:46:37 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.501 10:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.501 10:46:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.501 10:46:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.501 10:46:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.501 10:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.501 "name": "raid_bdev1", 00:18:16.501 "uuid": "ea522cda-2a0f-4f41-a77d-91c47edad7a4", 00:18:16.501 "strip_size_kb": 0, 00:18:16.501 "state": "online", 00:18:16.501 "raid_level": "raid1", 00:18:16.501 "superblock": true, 00:18:16.501 "num_base_bdevs": 2, 00:18:16.501 "num_base_bdevs_discovered": 1, 00:18:16.501 "num_base_bdevs_operational": 1, 00:18:16.501 "base_bdevs_list": [ 00:18:16.501 { 00:18:16.501 "name": null, 00:18:16.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.501 "is_configured": false, 00:18:16.501 "data_offset": 0, 00:18:16.501 "data_size": 7936 00:18:16.501 }, 00:18:16.501 { 00:18:16.501 "name": "BaseBdev2", 00:18:16.501 "uuid": "2a769dcd-4fa5-51ff-af41-8ba820485190", 00:18:16.501 "is_configured": true, 00:18:16.502 "data_offset": 256, 00:18:16.502 "data_size": 7936 00:18:16.502 } 00:18:16.502 ] 00:18:16.502 }' 00:18:16.502 10:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.502 10:46:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.081 10:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:17.081 10:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:17.081 10:46:37 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:17.081 10:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:17.081 10:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:17.081 10:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.081 10:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.081 10:46:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.081 10:46:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.081 10:46:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.081 10:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:17.081 "name": "raid_bdev1", 00:18:17.081 "uuid": "ea522cda-2a0f-4f41-a77d-91c47edad7a4", 00:18:17.081 "strip_size_kb": 0, 00:18:17.081 "state": "online", 00:18:17.081 "raid_level": "raid1", 00:18:17.081 "superblock": true, 00:18:17.081 "num_base_bdevs": 2, 00:18:17.081 "num_base_bdevs_discovered": 1, 00:18:17.081 "num_base_bdevs_operational": 1, 00:18:17.081 "base_bdevs_list": [ 00:18:17.081 { 00:18:17.081 "name": null, 00:18:17.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.081 "is_configured": false, 00:18:17.081 "data_offset": 0, 00:18:17.081 "data_size": 7936 00:18:17.081 }, 00:18:17.081 { 00:18:17.081 "name": "BaseBdev2", 00:18:17.081 "uuid": "2a769dcd-4fa5-51ff-af41-8ba820485190", 00:18:17.081 "is_configured": true, 00:18:17.081 "data_offset": 256, 00:18:17.081 "data_size": 7936 00:18:17.081 } 00:18:17.081 ] 00:18:17.081 }' 00:18:17.081 10:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:17.081 10:46:38 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:17.081 10:46:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:17.081 10:46:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:17.081 10:46:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:17.081 10:46:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.081 10:46:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.081 [2024-11-15 10:46:38.098939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:17.081 [2024-11-15 10:46:38.115061] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:18:17.081 10:46:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.081 10:46:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:17.081 [2024-11-15 10:46:38.117700] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:18.017 10:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:18.017 10:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:18.017 10:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:18.017 10:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:18.017 10:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:18.017 10:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.017 10:46:39 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.017 10:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.017 10:46:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.017 10:46:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.017 10:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:18.017 "name": "raid_bdev1", 00:18:18.017 "uuid": "ea522cda-2a0f-4f41-a77d-91c47edad7a4", 00:18:18.017 "strip_size_kb": 0, 00:18:18.017 "state": "online", 00:18:18.017 "raid_level": "raid1", 00:18:18.017 "superblock": true, 00:18:18.017 "num_base_bdevs": 2, 00:18:18.017 "num_base_bdevs_discovered": 2, 00:18:18.017 "num_base_bdevs_operational": 2, 00:18:18.017 "process": { 00:18:18.017 "type": "rebuild", 00:18:18.017 "target": "spare", 00:18:18.017 "progress": { 00:18:18.017 "blocks": 2560, 00:18:18.017 "percent": 32 00:18:18.017 } 00:18:18.017 }, 00:18:18.017 "base_bdevs_list": [ 00:18:18.017 { 00:18:18.017 "name": "spare", 00:18:18.017 "uuid": "29a83ff3-c45f-5774-9654-792d99966308", 00:18:18.017 "is_configured": true, 00:18:18.017 "data_offset": 256, 00:18:18.017 "data_size": 7936 00:18:18.017 }, 00:18:18.017 { 00:18:18.017 "name": "BaseBdev2", 00:18:18.017 "uuid": "2a769dcd-4fa5-51ff-af41-8ba820485190", 00:18:18.017 "is_configured": true, 00:18:18.017 "data_offset": 256, 00:18:18.017 "data_size": 7936 00:18:18.017 } 00:18:18.017 ] 00:18:18.017 }' 00:18:18.018 10:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:18.277 10:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:18.277 10:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:18.277 10:46:39 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:18.277 10:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:18.277 10:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:18.277 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:18.277 10:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:18.277 10:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:18.277 10:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:18.277 10:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=728 00:18:18.277 10:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:18.277 10:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:18.277 10:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:18.277 10:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:18.277 10:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:18.277 10:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:18.277 10:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.277 10:46:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.277 10:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.277 10:46:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.277 10:46:39 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.277 10:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:18.277 "name": "raid_bdev1", 00:18:18.277 "uuid": "ea522cda-2a0f-4f41-a77d-91c47edad7a4", 00:18:18.277 "strip_size_kb": 0, 00:18:18.277 "state": "online", 00:18:18.277 "raid_level": "raid1", 00:18:18.277 "superblock": true, 00:18:18.277 "num_base_bdevs": 2, 00:18:18.277 "num_base_bdevs_discovered": 2, 00:18:18.277 "num_base_bdevs_operational": 2, 00:18:18.277 "process": { 00:18:18.277 "type": "rebuild", 00:18:18.277 "target": "spare", 00:18:18.277 "progress": { 00:18:18.277 "blocks": 2816, 00:18:18.277 "percent": 35 00:18:18.277 } 00:18:18.277 }, 00:18:18.277 "base_bdevs_list": [ 00:18:18.277 { 00:18:18.277 "name": "spare", 00:18:18.277 "uuid": "29a83ff3-c45f-5774-9654-792d99966308", 00:18:18.277 "is_configured": true, 00:18:18.277 "data_offset": 256, 00:18:18.277 "data_size": 7936 00:18:18.277 }, 00:18:18.277 { 00:18:18.277 "name": "BaseBdev2", 00:18:18.277 "uuid": "2a769dcd-4fa5-51ff-af41-8ba820485190", 00:18:18.277 "is_configured": true, 00:18:18.277 "data_offset": 256, 00:18:18.277 "data_size": 7936 00:18:18.277 } 00:18:18.277 ] 00:18:18.277 }' 00:18:18.277 10:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:18.277 10:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:18.277 10:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:18.537 10:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:18.537 10:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:19.474 10:46:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:19.474 10:46:40 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:19.474 10:46:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:19.474 10:46:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:19.474 10:46:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:19.474 10:46:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:19.474 10:46:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.474 10:46:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.474 10:46:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.474 10:46:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.474 10:46:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.474 10:46:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:19.474 "name": "raid_bdev1", 00:18:19.474 "uuid": "ea522cda-2a0f-4f41-a77d-91c47edad7a4", 00:18:19.474 "strip_size_kb": 0, 00:18:19.474 "state": "online", 00:18:19.474 "raid_level": "raid1", 00:18:19.474 "superblock": true, 00:18:19.474 "num_base_bdevs": 2, 00:18:19.474 "num_base_bdevs_discovered": 2, 00:18:19.474 "num_base_bdevs_operational": 2, 00:18:19.474 "process": { 00:18:19.474 "type": "rebuild", 00:18:19.474 "target": "spare", 00:18:19.474 "progress": { 00:18:19.474 "blocks": 5888, 00:18:19.474 "percent": 74 00:18:19.474 } 00:18:19.474 }, 00:18:19.474 "base_bdevs_list": [ 00:18:19.474 { 00:18:19.474 "name": "spare", 00:18:19.474 "uuid": "29a83ff3-c45f-5774-9654-792d99966308", 00:18:19.474 "is_configured": true, 00:18:19.474 "data_offset": 256, 00:18:19.474 "data_size": 7936 00:18:19.474 
}, 00:18:19.474 { 00:18:19.474 "name": "BaseBdev2", 00:18:19.474 "uuid": "2a769dcd-4fa5-51ff-af41-8ba820485190", 00:18:19.474 "is_configured": true, 00:18:19.474 "data_offset": 256, 00:18:19.474 "data_size": 7936 00:18:19.474 } 00:18:19.474 ] 00:18:19.474 }' 00:18:19.474 10:46:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:19.474 10:46:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:19.474 10:46:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:19.474 10:46:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:19.474 10:46:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:20.409 [2024-11-15 10:46:41.241488] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:20.409 [2024-11-15 10:46:41.241614] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:20.409 [2024-11-15 10:46:41.241790] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:20.668 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:20.668 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:20.668 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.668 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:20.668 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:20.668 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.668 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:20.668 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.668 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.668 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.668 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.668 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:20.668 "name": "raid_bdev1", 00:18:20.668 "uuid": "ea522cda-2a0f-4f41-a77d-91c47edad7a4", 00:18:20.668 "strip_size_kb": 0, 00:18:20.668 "state": "online", 00:18:20.668 "raid_level": "raid1", 00:18:20.668 "superblock": true, 00:18:20.668 "num_base_bdevs": 2, 00:18:20.668 "num_base_bdevs_discovered": 2, 00:18:20.668 "num_base_bdevs_operational": 2, 00:18:20.668 "base_bdevs_list": [ 00:18:20.668 { 00:18:20.668 "name": "spare", 00:18:20.668 "uuid": "29a83ff3-c45f-5774-9654-792d99966308", 00:18:20.668 "is_configured": true, 00:18:20.668 "data_offset": 256, 00:18:20.668 "data_size": 7936 00:18:20.668 }, 00:18:20.668 { 00:18:20.668 "name": "BaseBdev2", 00:18:20.668 "uuid": "2a769dcd-4fa5-51ff-af41-8ba820485190", 00:18:20.668 "is_configured": true, 00:18:20.668 "data_offset": 256, 00:18:20.668 "data_size": 7936 00:18:20.668 } 00:18:20.668 ] 00:18:20.668 }' 00:18:20.668 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:20.668 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:20.668 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:20.668 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:20.668 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 
00:18:20.668 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:20.668 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.668 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:20.668 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:20.668 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.668 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.668 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.668 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.668 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.668 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.668 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:20.668 "name": "raid_bdev1", 00:18:20.668 "uuid": "ea522cda-2a0f-4f41-a77d-91c47edad7a4", 00:18:20.668 "strip_size_kb": 0, 00:18:20.668 "state": "online", 00:18:20.668 "raid_level": "raid1", 00:18:20.668 "superblock": true, 00:18:20.668 "num_base_bdevs": 2, 00:18:20.668 "num_base_bdevs_discovered": 2, 00:18:20.668 "num_base_bdevs_operational": 2, 00:18:20.668 "base_bdevs_list": [ 00:18:20.668 { 00:18:20.668 "name": "spare", 00:18:20.668 "uuid": "29a83ff3-c45f-5774-9654-792d99966308", 00:18:20.668 "is_configured": true, 00:18:20.668 "data_offset": 256, 00:18:20.668 "data_size": 7936 00:18:20.668 }, 00:18:20.668 { 00:18:20.668 "name": "BaseBdev2", 00:18:20.668 "uuid": "2a769dcd-4fa5-51ff-af41-8ba820485190", 00:18:20.668 "is_configured": true, 
00:18:20.668 "data_offset": 256, 00:18:20.668 "data_size": 7936 00:18:20.668 } 00:18:20.668 ] 00:18:20.668 }' 00:18:20.668 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:20.927 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:20.927 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:20.927 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:20.927 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:20.927 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:20.927 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:20.927 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:20.927 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:20.927 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:20.927 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.927 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.927 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.927 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.927 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.927 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.927 10:46:41 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.927 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.927 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.927 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.927 "name": "raid_bdev1", 00:18:20.927 "uuid": "ea522cda-2a0f-4f41-a77d-91c47edad7a4", 00:18:20.927 "strip_size_kb": 0, 00:18:20.927 "state": "online", 00:18:20.927 "raid_level": "raid1", 00:18:20.927 "superblock": true, 00:18:20.927 "num_base_bdevs": 2, 00:18:20.927 "num_base_bdevs_discovered": 2, 00:18:20.927 "num_base_bdevs_operational": 2, 00:18:20.927 "base_bdevs_list": [ 00:18:20.927 { 00:18:20.927 "name": "spare", 00:18:20.927 "uuid": "29a83ff3-c45f-5774-9654-792d99966308", 00:18:20.927 "is_configured": true, 00:18:20.927 "data_offset": 256, 00:18:20.927 "data_size": 7936 00:18:20.927 }, 00:18:20.927 { 00:18:20.927 "name": "BaseBdev2", 00:18:20.927 "uuid": "2a769dcd-4fa5-51ff-af41-8ba820485190", 00:18:20.927 "is_configured": true, 00:18:20.927 "data_offset": 256, 00:18:20.927 "data_size": 7936 00:18:20.927 } 00:18:20.927 ] 00:18:20.927 }' 00:18:20.927 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.927 10:46:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:21.492 10:46:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:21.492 10:46:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.492 10:46:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:21.492 [2024-11-15 10:46:42.418421] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:21.492 [2024-11-15 10:46:42.418638] bdev_raid.c:1899:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:18:21.492 [2024-11-15 10:46:42.418763] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:21.492 [2024-11-15 10:46:42.418866] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:21.492 [2024-11-15 10:46:42.418885] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:21.492 10:46:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.492 10:46:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.493 10:46:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:18:21.493 10:46:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.493 10:46:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:21.493 10:46:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.493 10:46:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:21.493 10:46:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:21.493 10:46:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:21.493 10:46:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:21.493 10:46:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:21.493 10:46:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:21.493 10:46:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:21.493 10:46:42 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:21.493 10:46:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:21.493 10:46:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:18:21.493 10:46:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:21.493 10:46:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:21.493 10:46:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:21.750 /dev/nbd0 00:18:21.750 10:46:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:21.750 10:46:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:21.750 10:46:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:21.750 10:46:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:18:21.750 10:46:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:21.750 10:46:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:21.750 10:46:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:21.750 10:46:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:18:21.750 10:46:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:21.750 10:46:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:21.750 10:46:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:21.750 1+0 records in 00:18:21.750 1+0 records out 00:18:21.750 
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243119 s, 16.8 MB/s 00:18:21.750 10:46:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:21.750 10:46:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:18:21.750 10:46:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:21.750 10:46:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:21.750 10:46:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:18:21.750 10:46:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:21.750 10:46:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:21.750 10:46:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:22.008 /dev/nbd1 00:18:22.008 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:22.008 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:22.008 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:22.008 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:18:22.008 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:22.008 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:22.008 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:22.008 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:18:22.008 10:46:43 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:22.008 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:22.008 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:22.008 1+0 records in 00:18:22.008 1+0 records out 00:18:22.008 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000455074 s, 9.0 MB/s 00:18:22.008 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:22.008 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:18:22.008 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:22.008 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:22.008 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:18:22.008 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:22.008 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:22.008 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:22.265 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:22.265 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:22.265 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:22.265 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:22.265 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@51 -- # local i 00:18:22.265 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:22.265 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:22.523 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:22.523 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:22.523 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:22.523 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:22.523 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:22.523 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:22.523 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:22.523 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:22.523 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:22.523 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:22.781 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:22.781 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:22.781 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:22.781 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:22.781 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:22.781 10:46:43 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:22.781 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:22.781 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:22.781 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:22.781 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:22.781 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.781 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:22.781 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.781 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:22.781 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.781 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:22.781 [2024-11-15 10:46:43.886368] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:22.781 [2024-11-15 10:46:43.886451] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.781 [2024-11-15 10:46:43.886485] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:22.781 [2024-11-15 10:46:43.886500] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.781 [2024-11-15 10:46:43.889571] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.781 [2024-11-15 10:46:43.889618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:22.781 [2024-11-15 10:46:43.889742] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: 
raid superblock found on bdev spare 00:18:22.781 [2024-11-15 10:46:43.889808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:22.781 [2024-11-15 10:46:43.890009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:22.781 spare 00:18:22.781 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.781 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:22.781 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.781 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:23.039 [2024-11-15 10:46:43.990164] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:23.039 [2024-11-15 10:46:43.990237] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:23.039 [2024-11-15 10:46:43.990684] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:23.039 [2024-11-15 10:46:43.990955] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:23.039 [2024-11-15 10:46:43.990973] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:23.039 [2024-11-15 10:46:43.991231] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:23.039 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.039 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:23.039 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:23.039 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:23.039 
10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:23.039 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:23.039 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:23.039 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.039 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.039 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.039 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.039 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.039 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.039 10:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.039 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:23.039 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.039 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.039 "name": "raid_bdev1", 00:18:23.039 "uuid": "ea522cda-2a0f-4f41-a77d-91c47edad7a4", 00:18:23.039 "strip_size_kb": 0, 00:18:23.039 "state": "online", 00:18:23.039 "raid_level": "raid1", 00:18:23.039 "superblock": true, 00:18:23.039 "num_base_bdevs": 2, 00:18:23.039 "num_base_bdevs_discovered": 2, 00:18:23.039 "num_base_bdevs_operational": 2, 00:18:23.039 "base_bdevs_list": [ 00:18:23.039 { 00:18:23.039 "name": "spare", 00:18:23.039 "uuid": "29a83ff3-c45f-5774-9654-792d99966308", 00:18:23.039 "is_configured": true, 00:18:23.039 "data_offset": 256, 00:18:23.039 
"data_size": 7936 00:18:23.039 }, 00:18:23.039 { 00:18:23.039 "name": "BaseBdev2", 00:18:23.039 "uuid": "2a769dcd-4fa5-51ff-af41-8ba820485190", 00:18:23.039 "is_configured": true, 00:18:23.039 "data_offset": 256, 00:18:23.039 "data_size": 7936 00:18:23.039 } 00:18:23.039 ] 00:18:23.039 }' 00:18:23.039 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.039 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:23.607 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:23.607 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:23.607 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:23.607 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:23.607 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:23.607 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.607 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.607 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:23.607 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.607 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.607 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:23.607 "name": "raid_bdev1", 00:18:23.607 "uuid": "ea522cda-2a0f-4f41-a77d-91c47edad7a4", 00:18:23.607 "strip_size_kb": 0, 00:18:23.607 "state": "online", 00:18:23.607 "raid_level": "raid1", 00:18:23.607 "superblock": true, 00:18:23.607 "num_base_bdevs": 2, 
00:18:23.607 "num_base_bdevs_discovered": 2, 00:18:23.607 "num_base_bdevs_operational": 2, 00:18:23.607 "base_bdevs_list": [ 00:18:23.607 { 00:18:23.607 "name": "spare", 00:18:23.607 "uuid": "29a83ff3-c45f-5774-9654-792d99966308", 00:18:23.607 "is_configured": true, 00:18:23.607 "data_offset": 256, 00:18:23.607 "data_size": 7936 00:18:23.607 }, 00:18:23.607 { 00:18:23.607 "name": "BaseBdev2", 00:18:23.607 "uuid": "2a769dcd-4fa5-51ff-af41-8ba820485190", 00:18:23.607 "is_configured": true, 00:18:23.607 "data_offset": 256, 00:18:23.607 "data_size": 7936 00:18:23.607 } 00:18:23.607 ] 00:18:23.607 }' 00:18:23.607 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:23.607 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:23.607 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:23.607 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:23.607 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:23.607 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.607 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.607 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:23.607 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.607 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:23.607 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:23.607 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.607 10:46:44 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:23.607 [2024-11-15 10:46:44.739385] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:23.607 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.607 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:23.607 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:23.607 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:23.607 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:23.607 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:23.607 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:23.607 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.607 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.607 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.607 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.607 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.607 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.607 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.607 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:23.865 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.865 
10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.865 "name": "raid_bdev1", 00:18:23.865 "uuid": "ea522cda-2a0f-4f41-a77d-91c47edad7a4", 00:18:23.865 "strip_size_kb": 0, 00:18:23.865 "state": "online", 00:18:23.866 "raid_level": "raid1", 00:18:23.866 "superblock": true, 00:18:23.866 "num_base_bdevs": 2, 00:18:23.866 "num_base_bdevs_discovered": 1, 00:18:23.866 "num_base_bdevs_operational": 1, 00:18:23.866 "base_bdevs_list": [ 00:18:23.866 { 00:18:23.866 "name": null, 00:18:23.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.866 "is_configured": false, 00:18:23.866 "data_offset": 0, 00:18:23.866 "data_size": 7936 00:18:23.866 }, 00:18:23.866 { 00:18:23.866 "name": "BaseBdev2", 00:18:23.866 "uuid": "2a769dcd-4fa5-51ff-af41-8ba820485190", 00:18:23.866 "is_configured": true, 00:18:23.866 "data_offset": 256, 00:18:23.866 "data_size": 7936 00:18:23.866 } 00:18:23.866 ] 00:18:23.866 }' 00:18:23.866 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.866 10:46:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.124 10:46:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:24.124 10:46:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.125 10:46:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.125 [2024-11-15 10:46:45.235554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:24.125 [2024-11-15 10:46:45.235796] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:24.125 [2024-11-15 10:46:45.235825] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:24.125 [2024-11-15 10:46:45.235877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:24.125 [2024-11-15 10:46:45.251406] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:18:24.125 10:46:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.125 10:46:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:24.125 [2024-11-15 10:46:45.253951] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:25.501 10:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:25.501 10:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:25.501 10:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:25.501 10:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:25.501 10:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:25.501 10:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.501 10:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.501 10:46:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.501 10:46:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:25.501 10:46:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.501 10:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:25.501 "name": "raid_bdev1", 00:18:25.501 "uuid": "ea522cda-2a0f-4f41-a77d-91c47edad7a4", 00:18:25.501 "strip_size_kb": 0, 00:18:25.501 "state": "online", 
00:18:25.501 "raid_level": "raid1", 00:18:25.501 "superblock": true, 00:18:25.501 "num_base_bdevs": 2, 00:18:25.501 "num_base_bdevs_discovered": 2, 00:18:25.501 "num_base_bdevs_operational": 2, 00:18:25.501 "process": { 00:18:25.501 "type": "rebuild", 00:18:25.501 "target": "spare", 00:18:25.501 "progress": { 00:18:25.501 "blocks": 2560, 00:18:25.501 "percent": 32 00:18:25.501 } 00:18:25.501 }, 00:18:25.501 "base_bdevs_list": [ 00:18:25.501 { 00:18:25.501 "name": "spare", 00:18:25.501 "uuid": "29a83ff3-c45f-5774-9654-792d99966308", 00:18:25.501 "is_configured": true, 00:18:25.501 "data_offset": 256, 00:18:25.501 "data_size": 7936 00:18:25.501 }, 00:18:25.501 { 00:18:25.501 "name": "BaseBdev2", 00:18:25.501 "uuid": "2a769dcd-4fa5-51ff-af41-8ba820485190", 00:18:25.501 "is_configured": true, 00:18:25.501 "data_offset": 256, 00:18:25.501 "data_size": 7936 00:18:25.501 } 00:18:25.501 ] 00:18:25.501 }' 00:18:25.501 10:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:25.501 10:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:25.501 10:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:25.501 10:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:25.501 10:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:25.502 10:46:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.502 10:46:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:25.502 [2024-11-15 10:46:46.443535] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:25.502 [2024-11-15 10:46:46.462767] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:25.502 [2024-11-15 
10:46:46.462851] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:25.502 [2024-11-15 10:46:46.462877] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:25.502 [2024-11-15 10:46:46.462893] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:25.502 10:46:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.502 10:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:25.502 10:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:25.502 10:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:25.502 10:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:25.502 10:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:25.502 10:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:25.502 10:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.502 10:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.502 10:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.502 10:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.502 10:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.502 10:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.502 10:46:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.502 10:46:46 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:18:25.502 10:46:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.502 10:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.502 "name": "raid_bdev1", 00:18:25.502 "uuid": "ea522cda-2a0f-4f41-a77d-91c47edad7a4", 00:18:25.502 "strip_size_kb": 0, 00:18:25.502 "state": "online", 00:18:25.502 "raid_level": "raid1", 00:18:25.502 "superblock": true, 00:18:25.502 "num_base_bdevs": 2, 00:18:25.502 "num_base_bdevs_discovered": 1, 00:18:25.502 "num_base_bdevs_operational": 1, 00:18:25.502 "base_bdevs_list": [ 00:18:25.502 { 00:18:25.502 "name": null, 00:18:25.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.502 "is_configured": false, 00:18:25.502 "data_offset": 0, 00:18:25.502 "data_size": 7936 00:18:25.502 }, 00:18:25.502 { 00:18:25.502 "name": "BaseBdev2", 00:18:25.502 "uuid": "2a769dcd-4fa5-51ff-af41-8ba820485190", 00:18:25.502 "is_configured": true, 00:18:25.502 "data_offset": 256, 00:18:25.502 "data_size": 7936 00:18:25.502 } 00:18:25.502 ] 00:18:25.502 }' 00:18:25.502 10:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.502 10:46:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:26.069 10:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:26.069 10:46:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.069 10:46:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:26.069 [2024-11-15 10:46:46.990749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:26.069 [2024-11-15 10:46:46.990974] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.069 [2024-11-15 10:46:46.991016] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000ab80 00:18:26.069 [2024-11-15 10:46:46.991036] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:26.069 [2024-11-15 10:46:46.991639] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.069 [2024-11-15 10:46:46.991692] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:26.069 [2024-11-15 10:46:46.991813] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:26.069 [2024-11-15 10:46:46.991839] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:26.069 [2024-11-15 10:46:46.991852] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:26.069 [2024-11-15 10:46:46.991894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:26.069 [2024-11-15 10:46:47.007337] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:26.069 spare 00:18:26.069 10:46:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.069 10:46:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:26.069 [2024-11-15 10:46:47.009835] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:27.032 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:27.032 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:27.032 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:27.032 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:27.032 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:27.032 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.032 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.032 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.032 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:27.032 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.032 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:27.032 "name": "raid_bdev1", 00:18:27.032 "uuid": "ea522cda-2a0f-4f41-a77d-91c47edad7a4", 00:18:27.032 "strip_size_kb": 0, 00:18:27.032 "state": "online", 00:18:27.032 "raid_level": "raid1", 00:18:27.032 "superblock": true, 00:18:27.032 "num_base_bdevs": 2, 00:18:27.032 "num_base_bdevs_discovered": 2, 00:18:27.032 "num_base_bdevs_operational": 2, 00:18:27.032 "process": { 00:18:27.032 "type": "rebuild", 00:18:27.032 "target": "spare", 00:18:27.032 "progress": { 00:18:27.032 "blocks": 2560, 00:18:27.032 "percent": 32 00:18:27.032 } 00:18:27.032 }, 00:18:27.032 "base_bdevs_list": [ 00:18:27.032 { 00:18:27.032 "name": "spare", 00:18:27.032 "uuid": "29a83ff3-c45f-5774-9654-792d99966308", 00:18:27.032 "is_configured": true, 00:18:27.032 "data_offset": 256, 00:18:27.032 "data_size": 7936 00:18:27.032 }, 00:18:27.032 { 00:18:27.032 "name": "BaseBdev2", 00:18:27.032 "uuid": "2a769dcd-4fa5-51ff-af41-8ba820485190", 00:18:27.032 "is_configured": true, 00:18:27.032 "data_offset": 256, 00:18:27.032 "data_size": 7936 00:18:27.032 } 00:18:27.032 ] 00:18:27.032 }' 00:18:27.032 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:27.032 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:18:27.032 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:27.032 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:27.032 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:27.032 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.032 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:27.032 [2024-11-15 10:46:48.183534] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:27.291 [2024-11-15 10:46:48.218694] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:27.291 [2024-11-15 10:46:48.218776] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:27.291 [2024-11-15 10:46:48.218806] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:27.291 [2024-11-15 10:46:48.218819] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:27.291 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.291 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:27.291 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:27.291 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:27.291 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:27.291 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:27.291 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:18:27.291 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.291 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.291 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.291 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.291 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.291 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.291 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.291 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:27.291 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.291 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.291 "name": "raid_bdev1", 00:18:27.291 "uuid": "ea522cda-2a0f-4f41-a77d-91c47edad7a4", 00:18:27.291 "strip_size_kb": 0, 00:18:27.291 "state": "online", 00:18:27.291 "raid_level": "raid1", 00:18:27.291 "superblock": true, 00:18:27.291 "num_base_bdevs": 2, 00:18:27.291 "num_base_bdevs_discovered": 1, 00:18:27.291 "num_base_bdevs_operational": 1, 00:18:27.291 "base_bdevs_list": [ 00:18:27.291 { 00:18:27.291 "name": null, 00:18:27.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.291 "is_configured": false, 00:18:27.291 "data_offset": 0, 00:18:27.291 "data_size": 7936 00:18:27.291 }, 00:18:27.291 { 00:18:27.291 "name": "BaseBdev2", 00:18:27.291 "uuid": "2a769dcd-4fa5-51ff-af41-8ba820485190", 00:18:27.291 "is_configured": true, 00:18:27.291 "data_offset": 256, 00:18:27.291 "data_size": 7936 00:18:27.291 } 00:18:27.291 ] 00:18:27.291 }' 
00:18:27.291 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.291 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:27.859 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:27.859 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:27.859 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:27.859 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:27.859 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:27.859 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.859 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.859 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.859 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:27.859 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.859 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:27.859 "name": "raid_bdev1", 00:18:27.859 "uuid": "ea522cda-2a0f-4f41-a77d-91c47edad7a4", 00:18:27.859 "strip_size_kb": 0, 00:18:27.859 "state": "online", 00:18:27.859 "raid_level": "raid1", 00:18:27.859 "superblock": true, 00:18:27.859 "num_base_bdevs": 2, 00:18:27.859 "num_base_bdevs_discovered": 1, 00:18:27.859 "num_base_bdevs_operational": 1, 00:18:27.859 "base_bdevs_list": [ 00:18:27.859 { 00:18:27.859 "name": null, 00:18:27.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.859 "is_configured": false, 00:18:27.859 "data_offset": 0, 
00:18:27.859 "data_size": 7936 00:18:27.859 }, 00:18:27.859 { 00:18:27.859 "name": "BaseBdev2", 00:18:27.859 "uuid": "2a769dcd-4fa5-51ff-af41-8ba820485190", 00:18:27.859 "is_configured": true, 00:18:27.859 "data_offset": 256, 00:18:27.859 "data_size": 7936 00:18:27.859 } 00:18:27.859 ] 00:18:27.859 }' 00:18:27.859 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:27.859 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:27.859 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:27.859 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:27.859 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:27.859 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.859 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:27.859 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.859 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:27.859 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.859 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:27.859 [2024-11-15 10:46:48.922707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:27.859 [2024-11-15 10:46:48.922770] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:27.859 [2024-11-15 10:46:48.922802] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:27.859 [2024-11-15 10:46:48.922829] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:27.859 [2024-11-15 10:46:48.923391] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:27.859 [2024-11-15 10:46:48.923424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:27.859 [2024-11-15 10:46:48.923553] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:27.859 [2024-11-15 10:46:48.923576] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:27.859 [2024-11-15 10:46:48.923603] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:27.859 [2024-11-15 10:46:48.923617] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:27.859 BaseBdev1 00:18:27.859 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.859 10:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:28.795 10:46:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:28.795 10:46:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:28.795 10:46:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:28.795 10:46:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:28.795 10:46:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:28.795 10:46:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:28.795 10:46:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.795 10:46:49 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.795 10:46:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.795 10:46:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.795 10:46:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.795 10:46:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.795 10:46:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:28.795 10:46:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.795 10:46:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.054 10:46:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.054 "name": "raid_bdev1", 00:18:29.054 "uuid": "ea522cda-2a0f-4f41-a77d-91c47edad7a4", 00:18:29.054 "strip_size_kb": 0, 00:18:29.054 "state": "online", 00:18:29.054 "raid_level": "raid1", 00:18:29.054 "superblock": true, 00:18:29.054 "num_base_bdevs": 2, 00:18:29.054 "num_base_bdevs_discovered": 1, 00:18:29.054 "num_base_bdevs_operational": 1, 00:18:29.054 "base_bdevs_list": [ 00:18:29.054 { 00:18:29.054 "name": null, 00:18:29.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.054 "is_configured": false, 00:18:29.054 "data_offset": 0, 00:18:29.054 "data_size": 7936 00:18:29.054 }, 00:18:29.054 { 00:18:29.054 "name": "BaseBdev2", 00:18:29.054 "uuid": "2a769dcd-4fa5-51ff-af41-8ba820485190", 00:18:29.054 "is_configured": true, 00:18:29.054 "data_offset": 256, 00:18:29.054 "data_size": 7936 00:18:29.054 } 00:18:29.054 ] 00:18:29.054 }' 00:18:29.054 10:46:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.054 10:46:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:18:29.312 10:46:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:29.312 10:46:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:29.312 10:46:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:29.312 10:46:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:29.312 10:46:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:29.312 10:46:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.312 10:46:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.312 10:46:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.312 10:46:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:29.312 10:46:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.571 10:46:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:29.571 "name": "raid_bdev1", 00:18:29.571 "uuid": "ea522cda-2a0f-4f41-a77d-91c47edad7a4", 00:18:29.571 "strip_size_kb": 0, 00:18:29.571 "state": "online", 00:18:29.571 "raid_level": "raid1", 00:18:29.571 "superblock": true, 00:18:29.571 "num_base_bdevs": 2, 00:18:29.571 "num_base_bdevs_discovered": 1, 00:18:29.571 "num_base_bdevs_operational": 1, 00:18:29.571 "base_bdevs_list": [ 00:18:29.571 { 00:18:29.571 "name": null, 00:18:29.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.571 "is_configured": false, 00:18:29.571 "data_offset": 0, 00:18:29.571 "data_size": 7936 00:18:29.571 }, 00:18:29.571 { 00:18:29.571 "name": "BaseBdev2", 00:18:29.571 "uuid": "2a769dcd-4fa5-51ff-af41-8ba820485190", 00:18:29.571 "is_configured": true, 
00:18:29.571 "data_offset": 256, 00:18:29.571 "data_size": 7936 00:18:29.571 } 00:18:29.571 ] 00:18:29.571 }' 00:18:29.571 10:46:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:29.571 10:46:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:29.571 10:46:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:29.571 10:46:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:29.571 10:46:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:29.571 10:46:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:18:29.571 10:46:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:29.571 10:46:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:29.571 10:46:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:29.571 10:46:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:29.571 10:46:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:29.571 10:46:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:29.571 10:46:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.571 10:46:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:29.571 [2024-11-15 10:46:50.607220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:29.571 [2024-11-15 10:46:50.607579] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:29.571 [2024-11-15 10:46:50.607620] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:29.571 request: 00:18:29.571 { 00:18:29.571 "base_bdev": "BaseBdev1", 00:18:29.571 "raid_bdev": "raid_bdev1", 00:18:29.571 "method": "bdev_raid_add_base_bdev", 00:18:29.571 "req_id": 1 00:18:29.571 } 00:18:29.571 Got JSON-RPC error response 00:18:29.571 response: 00:18:29.571 { 00:18:29.571 "code": -22, 00:18:29.571 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:29.571 } 00:18:29.571 10:46:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:29.571 10:46:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:18:29.571 10:46:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:29.571 10:46:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:29.571 10:46:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:29.571 10:46:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:30.507 10:46:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:30.507 10:46:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:30.507 10:46:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:30.507 10:46:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:30.507 10:46:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:30.507 10:46:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:18:30.507 10:46:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.507 10:46:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.507 10:46:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.507 10:46:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.507 10:46:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.507 10:46:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.507 10:46:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.507 10:46:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:30.507 10:46:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.766 10:46:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.766 "name": "raid_bdev1", 00:18:30.766 "uuid": "ea522cda-2a0f-4f41-a77d-91c47edad7a4", 00:18:30.766 "strip_size_kb": 0, 00:18:30.766 "state": "online", 00:18:30.766 "raid_level": "raid1", 00:18:30.766 "superblock": true, 00:18:30.766 "num_base_bdevs": 2, 00:18:30.766 "num_base_bdevs_discovered": 1, 00:18:30.766 "num_base_bdevs_operational": 1, 00:18:30.766 "base_bdevs_list": [ 00:18:30.766 { 00:18:30.766 "name": null, 00:18:30.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.766 "is_configured": false, 00:18:30.766 "data_offset": 0, 00:18:30.766 "data_size": 7936 00:18:30.766 }, 00:18:30.766 { 00:18:30.766 "name": "BaseBdev2", 00:18:30.766 "uuid": "2a769dcd-4fa5-51ff-af41-8ba820485190", 00:18:30.766 "is_configured": true, 00:18:30.766 "data_offset": 256, 00:18:30.766 "data_size": 7936 00:18:30.766 } 00:18:30.766 ] 00:18:30.766 }' 
00:18:30.766 10:46:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.766 10:46:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:31.024 10:46:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:31.025 10:46:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:31.025 10:46:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:31.025 10:46:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:31.025 10:46:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:31.025 10:46:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.025 10:46:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.025 10:46:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.025 10:46:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:31.025 10:46:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.283 10:46:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:31.283 "name": "raid_bdev1", 00:18:31.283 "uuid": "ea522cda-2a0f-4f41-a77d-91c47edad7a4", 00:18:31.283 "strip_size_kb": 0, 00:18:31.283 "state": "online", 00:18:31.283 "raid_level": "raid1", 00:18:31.283 "superblock": true, 00:18:31.283 "num_base_bdevs": 2, 00:18:31.283 "num_base_bdevs_discovered": 1, 00:18:31.283 "num_base_bdevs_operational": 1, 00:18:31.283 "base_bdevs_list": [ 00:18:31.283 { 00:18:31.283 "name": null, 00:18:31.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.283 "is_configured": false, 00:18:31.283 "data_offset": 0, 
00:18:31.283 "data_size": 7936 00:18:31.283 }, 00:18:31.283 { 00:18:31.283 "name": "BaseBdev2", 00:18:31.283 "uuid": "2a769dcd-4fa5-51ff-af41-8ba820485190", 00:18:31.283 "is_configured": true, 00:18:31.283 "data_offset": 256, 00:18:31.283 "data_size": 7936 00:18:31.283 } 00:18:31.283 ] 00:18:31.283 }' 00:18:31.283 10:46:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:31.283 10:46:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:31.283 10:46:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:31.283 10:46:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:31.283 10:46:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86877 00:18:31.283 10:46:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86877 ']' 00:18:31.283 10:46:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86877 00:18:31.283 10:46:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:18:31.283 10:46:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:31.283 10:46:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86877 00:18:31.283 killing process with pid 86877 00:18:31.283 Received shutdown signal, test time was about 60.000000 seconds 00:18:31.283 00:18:31.283 Latency(us) 00:18:31.283 [2024-11-15T10:46:52.445Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.283 [2024-11-15T10:46:52.445Z] =================================================================================================================== 00:18:31.283 [2024-11-15T10:46:52.445Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:31.283 10:46:52 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:31.283 10:46:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:31.283 10:46:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86877' 00:18:31.283 10:46:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86877 00:18:31.283 [2024-11-15 10:46:52.339463] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:31.283 10:46:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86877 00:18:31.283 [2024-11-15 10:46:52.339668] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:31.283 [2024-11-15 10:46:52.339740] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:31.283 [2024-11-15 10:46:52.339763] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:31.542 [2024-11-15 10:46:52.615430] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:32.478 10:46:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:18:32.478 00:18:32.478 real 0m21.302s 00:18:32.478 user 0m28.876s 00:18:32.478 sys 0m2.426s 00:18:32.478 ************************************ 00:18:32.478 END TEST raid_rebuild_test_sb_4k 00:18:32.478 ************************************ 00:18:32.478 10:46:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:32.478 10:46:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:32.737 10:46:53 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:18:32.737 10:46:53 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:18:32.737 10:46:53 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:32.737 10:46:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:32.737 10:46:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:32.737 ************************************ 00:18:32.737 START TEST raid_state_function_test_sb_md_separate 00:18:32.737 ************************************ 00:18:32.737 10:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:18:32.737 10:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:32.737 10:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:32.737 10:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:32.737 10:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:32.737 10:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:32.737 10:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:32.737 10:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:32.737 10:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:32.737 10:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:32.737 10:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:32.737 10:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:32.737 10:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:18:32.737 10:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:32.737 10:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:32.737 10:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:32.737 10:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:32.737 10:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:32.737 10:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:32.737 10:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:32.737 10:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:32.737 10:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:32.737 10:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:32.737 10:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87585 00:18:32.737 10:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:32.737 Process raid pid: 87585 00:18:32.737 10:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87585' 00:18:32.737 10:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87585 00:18:32.737 10:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87585 ']' 00:18:32.737 10:46:53 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.737 10:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:32.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.737 10:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.737 10:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:32.737 10:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.737 [2024-11-15 10:46:53.785992] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:18:32.737 [2024-11-15 10:46:53.786146] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:32.995 [2024-11-15 10:46:53.963535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.995 [2024-11-15 10:46:54.098325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.253 [2024-11-15 10:46:54.308292] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:33.253 [2024-11-15 10:46:54.308331] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:33.818 10:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:33.818 10:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:33.818 10:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # 
rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:33.818 10:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.818 10:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.818 [2024-11-15 10:46:54.792914] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:33.818 [2024-11-15 10:46:54.792990] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:33.818 [2024-11-15 10:46:54.793008] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:33.818 [2024-11-15 10:46:54.793024] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:33.818 10:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.818 10:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:33.818 10:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:33.818 10:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:33.818 10:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:33.818 10:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:33.818 10:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:33.818 10:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.818 10:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.818 10:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.818 10:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.818 10:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.818 10:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:33.818 10:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.818 10:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.818 10:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.818 10:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.818 "name": "Existed_Raid", 00:18:33.818 "uuid": "c1ab05ee-18a2-4b8a-b046-cfbdf2c1e757", 00:18:33.818 "strip_size_kb": 0, 00:18:33.818 "state": "configuring", 00:18:33.818 "raid_level": "raid1", 00:18:33.818 "superblock": true, 00:18:33.818 "num_base_bdevs": 2, 00:18:33.818 "num_base_bdevs_discovered": 0, 00:18:33.818 "num_base_bdevs_operational": 2, 00:18:33.818 "base_bdevs_list": [ 00:18:33.818 { 00:18:33.818 "name": "BaseBdev1", 00:18:33.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.818 "is_configured": false, 00:18:33.818 "data_offset": 0, 00:18:33.818 "data_size": 0 00:18:33.818 }, 00:18:33.818 { 00:18:33.818 "name": "BaseBdev2", 00:18:33.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.818 "is_configured": false, 00:18:33.818 "data_offset": 0, 00:18:33.818 "data_size": 0 00:18:33.818 } 00:18:33.818 ] 00:18:33.818 }' 00:18:33.818 10:46:54 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.818 10:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.383 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:34.383 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.383 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.383 [2024-11-15 10:46:55.305004] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:34.383 [2024-11-15 10:46:55.305181] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:34.384 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.384 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:34.384 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.384 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.384 [2024-11-15 10:46:55.312996] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:34.384 [2024-11-15 10:46:55.313057] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:34.384 [2024-11-15 10:46:55.313075] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:34.384 [2024-11-15 10:46:55.313094] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:34.384 10:46:55 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.384 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:18:34.384 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.384 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.384 [2024-11-15 10:46:55.359312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:34.384 BaseBdev1 00:18:34.384 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.384 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:34.384 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:34.384 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:34.384 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:18:34.384 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:34.384 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:34.384 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:34.384 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.384 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.384 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.384 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:34.384 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.384 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.384 [ 00:18:34.384 { 00:18:34.384 "name": "BaseBdev1", 00:18:34.384 "aliases": [ 00:18:34.384 "3de0a394-2fbf-402f-aa63-02a94e91f8d5" 00:18:34.384 ], 00:18:34.384 "product_name": "Malloc disk", 00:18:34.384 "block_size": 4096, 00:18:34.384 "num_blocks": 8192, 00:18:34.384 "uuid": "3de0a394-2fbf-402f-aa63-02a94e91f8d5", 00:18:34.384 "md_size": 32, 00:18:34.384 "md_interleave": false, 00:18:34.384 "dif_type": 0, 00:18:34.384 "assigned_rate_limits": { 00:18:34.384 "rw_ios_per_sec": 0, 00:18:34.384 "rw_mbytes_per_sec": 0, 00:18:34.384 "r_mbytes_per_sec": 0, 00:18:34.384 "w_mbytes_per_sec": 0 00:18:34.384 }, 00:18:34.384 "claimed": true, 00:18:34.384 "claim_type": "exclusive_write", 00:18:34.384 "zoned": false, 00:18:34.384 "supported_io_types": { 00:18:34.384 "read": true, 00:18:34.384 "write": true, 00:18:34.384 "unmap": true, 00:18:34.384 "flush": true, 00:18:34.384 "reset": true, 00:18:34.384 "nvme_admin": false, 00:18:34.384 "nvme_io": false, 00:18:34.384 "nvme_io_md": false, 00:18:34.384 "write_zeroes": true, 00:18:34.384 "zcopy": true, 00:18:34.384 "get_zone_info": false, 00:18:34.384 "zone_management": false, 00:18:34.384 "zone_append": false, 00:18:34.384 "compare": false, 00:18:34.384 "compare_and_write": false, 00:18:34.384 "abort": true, 00:18:34.384 "seek_hole": false, 00:18:34.384 "seek_data": false, 00:18:34.384 "copy": true, 00:18:34.384 "nvme_iov_md": false 00:18:34.384 }, 00:18:34.384 "memory_domains": [ 00:18:34.384 { 00:18:34.384 "dma_device_id": "system", 00:18:34.384 "dma_device_type": 1 00:18:34.384 }, 
00:18:34.384 { 00:18:34.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:34.384 "dma_device_type": 2 00:18:34.384 } 00:18:34.384 ], 00:18:34.384 "driver_specific": {} 00:18:34.384 } 00:18:34.384 ] 00:18:34.384 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.384 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:18:34.384 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:34.384 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:34.384 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:34.384 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:34.384 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:34.384 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:34.384 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.384 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.384 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.384 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.384 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:34.384 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:18:34.384 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.384 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.384 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.384 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.384 "name": "Existed_Raid", 00:18:34.384 "uuid": "08e09303-f75b-4df0-807a-6546bf61460c", 00:18:34.384 "strip_size_kb": 0, 00:18:34.384 "state": "configuring", 00:18:34.384 "raid_level": "raid1", 00:18:34.384 "superblock": true, 00:18:34.384 "num_base_bdevs": 2, 00:18:34.384 "num_base_bdevs_discovered": 1, 00:18:34.384 "num_base_bdevs_operational": 2, 00:18:34.384 "base_bdevs_list": [ 00:18:34.384 { 00:18:34.384 "name": "BaseBdev1", 00:18:34.384 "uuid": "3de0a394-2fbf-402f-aa63-02a94e91f8d5", 00:18:34.384 "is_configured": true, 00:18:34.384 "data_offset": 256, 00:18:34.384 "data_size": 7936 00:18:34.384 }, 00:18:34.384 { 00:18:34.384 "name": "BaseBdev2", 00:18:34.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.384 "is_configured": false, 00:18:34.384 "data_offset": 0, 00:18:34.384 "data_size": 0 00:18:34.384 } 00:18:34.384 ] 00:18:34.384 }' 00:18:34.384 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.384 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.949 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:34.949 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.949 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:18:34.949 [2024-11-15 10:46:55.899558] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:34.949 [2024-11-15 10:46:55.899757] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:34.949 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.949 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:34.949 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.949 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.949 [2024-11-15 10:46:55.911602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:34.949 [2024-11-15 10:46:55.914073] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:34.949 [2024-11-15 10:46:55.914290] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:34.949 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.949 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:34.949 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:34.949 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:34.949 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:34.949 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:18:34.949 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:34.949 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:34.949 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:34.949 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.949 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.949 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.949 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.949 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.949 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.949 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:34.949 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.949 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.949 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.949 "name": "Existed_Raid", 00:18:34.949 "uuid": "7fe156fc-8cf8-44f9-bdc5-bb4621164008", 00:18:34.949 "strip_size_kb": 0, 00:18:34.949 "state": "configuring", 00:18:34.949 "raid_level": "raid1", 00:18:34.949 "superblock": true, 00:18:34.949 "num_base_bdevs": 2, 00:18:34.949 "num_base_bdevs_discovered": 1, 00:18:34.949 
"num_base_bdevs_operational": 2, 00:18:34.949 "base_bdevs_list": [ 00:18:34.949 { 00:18:34.949 "name": "BaseBdev1", 00:18:34.949 "uuid": "3de0a394-2fbf-402f-aa63-02a94e91f8d5", 00:18:34.949 "is_configured": true, 00:18:34.949 "data_offset": 256, 00:18:34.949 "data_size": 7936 00:18:34.949 }, 00:18:34.949 { 00:18:34.949 "name": "BaseBdev2", 00:18:34.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.949 "is_configured": false, 00:18:34.949 "data_offset": 0, 00:18:34.949 "data_size": 0 00:18:34.949 } 00:18:34.949 ] 00:18:34.949 }' 00:18:34.949 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.949 10:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.516 10:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:18:35.516 10:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.516 10:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.516 [2024-11-15 10:46:56.473099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:35.516 [2024-11-15 10:46:56.473668] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:35.516 [2024-11-15 10:46:56.473812] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:35.516 [2024-11-15 10:46:56.473970] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:35.516 BaseBdev2 00:18:35.516 [2024-11-15 10:46:56.474244] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:35.516 [2024-11-15 10:46:56.474267] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:18:35.516 [2024-11-15 10:46:56.474396] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:35.516 10:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.516 10:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:35.516 10:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:35.516 10:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:35.516 10:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:18:35.516 10:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:35.516 10:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:35.516 10:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:35.516 10:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.516 10:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.516 10:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.516 10:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:35.516 10:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.516 10:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.516 [ 00:18:35.516 { 00:18:35.516 "name": "BaseBdev2", 00:18:35.516 "aliases": [ 00:18:35.516 
"1d1023c9-0b4d-4a5a-96a6-b9e2c2dc947a" 00:18:35.516 ], 00:18:35.516 "product_name": "Malloc disk", 00:18:35.516 "block_size": 4096, 00:18:35.516 "num_blocks": 8192, 00:18:35.516 "uuid": "1d1023c9-0b4d-4a5a-96a6-b9e2c2dc947a", 00:18:35.516 "md_size": 32, 00:18:35.516 "md_interleave": false, 00:18:35.516 "dif_type": 0, 00:18:35.516 "assigned_rate_limits": { 00:18:35.516 "rw_ios_per_sec": 0, 00:18:35.516 "rw_mbytes_per_sec": 0, 00:18:35.516 "r_mbytes_per_sec": 0, 00:18:35.516 "w_mbytes_per_sec": 0 00:18:35.516 }, 00:18:35.516 "claimed": true, 00:18:35.516 "claim_type": "exclusive_write", 00:18:35.516 "zoned": false, 00:18:35.516 "supported_io_types": { 00:18:35.516 "read": true, 00:18:35.516 "write": true, 00:18:35.516 "unmap": true, 00:18:35.516 "flush": true, 00:18:35.516 "reset": true, 00:18:35.516 "nvme_admin": false, 00:18:35.516 "nvme_io": false, 00:18:35.516 "nvme_io_md": false, 00:18:35.516 "write_zeroes": true, 00:18:35.516 "zcopy": true, 00:18:35.516 "get_zone_info": false, 00:18:35.516 "zone_management": false, 00:18:35.516 "zone_append": false, 00:18:35.516 "compare": false, 00:18:35.516 "compare_and_write": false, 00:18:35.516 "abort": true, 00:18:35.516 "seek_hole": false, 00:18:35.516 "seek_data": false, 00:18:35.516 "copy": true, 00:18:35.516 "nvme_iov_md": false 00:18:35.516 }, 00:18:35.516 "memory_domains": [ 00:18:35.516 { 00:18:35.516 "dma_device_id": "system", 00:18:35.516 "dma_device_type": 1 00:18:35.516 }, 00:18:35.516 { 00:18:35.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.516 "dma_device_type": 2 00:18:35.516 } 00:18:35.516 ], 00:18:35.516 "driver_specific": {} 00:18:35.516 } 00:18:35.516 ] 00:18:35.516 10:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.516 10:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:18:35.516 10:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( 
i++ )) 00:18:35.516 10:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:35.516 10:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:35.516 10:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:35.516 10:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:35.516 10:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:35.516 10:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:35.516 10:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:35.516 10:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.516 10:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.516 10:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.516 10:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.516 10:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.516 10:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.516 10:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.516 10:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:35.516 10:46:56 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.516 10:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.516 "name": "Existed_Raid", 00:18:35.516 "uuid": "7fe156fc-8cf8-44f9-bdc5-bb4621164008", 00:18:35.516 "strip_size_kb": 0, 00:18:35.516 "state": "online", 00:18:35.516 "raid_level": "raid1", 00:18:35.516 "superblock": true, 00:18:35.516 "num_base_bdevs": 2, 00:18:35.516 "num_base_bdevs_discovered": 2, 00:18:35.516 "num_base_bdevs_operational": 2, 00:18:35.516 "base_bdevs_list": [ 00:18:35.516 { 00:18:35.516 "name": "BaseBdev1", 00:18:35.516 "uuid": "3de0a394-2fbf-402f-aa63-02a94e91f8d5", 00:18:35.516 "is_configured": true, 00:18:35.516 "data_offset": 256, 00:18:35.516 "data_size": 7936 00:18:35.516 }, 00:18:35.516 { 00:18:35.516 "name": "BaseBdev2", 00:18:35.516 "uuid": "1d1023c9-0b4d-4a5a-96a6-b9e2c2dc947a", 00:18:35.516 "is_configured": true, 00:18:35.516 "data_offset": 256, 00:18:35.516 "data_size": 7936 00:18:35.516 } 00:18:35.516 ] 00:18:35.516 }' 00:18:35.516 10:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.516 10:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.081 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:36.081 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:36.081 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:36.081 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:36.081 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:36.081 10:46:57 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:36.081 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:36.081 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:36.081 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.081 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.081 [2024-11-15 10:46:57.061708] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:36.081 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.081 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:36.081 "name": "Existed_Raid", 00:18:36.081 "aliases": [ 00:18:36.081 "7fe156fc-8cf8-44f9-bdc5-bb4621164008" 00:18:36.081 ], 00:18:36.081 "product_name": "Raid Volume", 00:18:36.081 "block_size": 4096, 00:18:36.081 "num_blocks": 7936, 00:18:36.081 "uuid": "7fe156fc-8cf8-44f9-bdc5-bb4621164008", 00:18:36.081 "md_size": 32, 00:18:36.081 "md_interleave": false, 00:18:36.081 "dif_type": 0, 00:18:36.081 "assigned_rate_limits": { 00:18:36.081 "rw_ios_per_sec": 0, 00:18:36.081 "rw_mbytes_per_sec": 0, 00:18:36.081 "r_mbytes_per_sec": 0, 00:18:36.081 "w_mbytes_per_sec": 0 00:18:36.082 }, 00:18:36.082 "claimed": false, 00:18:36.082 "zoned": false, 00:18:36.082 "supported_io_types": { 00:18:36.082 "read": true, 00:18:36.082 "write": true, 00:18:36.082 "unmap": false, 00:18:36.082 "flush": false, 00:18:36.082 "reset": true, 00:18:36.082 "nvme_admin": false, 00:18:36.082 "nvme_io": false, 00:18:36.082 "nvme_io_md": false, 00:18:36.082 "write_zeroes": true, 00:18:36.082 "zcopy": false, 00:18:36.082 "get_zone_info": 
false, 00:18:36.082 "zone_management": false, 00:18:36.082 "zone_append": false, 00:18:36.082 "compare": false, 00:18:36.082 "compare_and_write": false, 00:18:36.082 "abort": false, 00:18:36.082 "seek_hole": false, 00:18:36.082 "seek_data": false, 00:18:36.082 "copy": false, 00:18:36.082 "nvme_iov_md": false 00:18:36.082 }, 00:18:36.082 "memory_domains": [ 00:18:36.082 { 00:18:36.082 "dma_device_id": "system", 00:18:36.082 "dma_device_type": 1 00:18:36.082 }, 00:18:36.082 { 00:18:36.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.082 "dma_device_type": 2 00:18:36.082 }, 00:18:36.082 { 00:18:36.082 "dma_device_id": "system", 00:18:36.082 "dma_device_type": 1 00:18:36.082 }, 00:18:36.082 { 00:18:36.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.082 "dma_device_type": 2 00:18:36.082 } 00:18:36.082 ], 00:18:36.082 "driver_specific": { 00:18:36.082 "raid": { 00:18:36.082 "uuid": "7fe156fc-8cf8-44f9-bdc5-bb4621164008", 00:18:36.082 "strip_size_kb": 0, 00:18:36.082 "state": "online", 00:18:36.082 "raid_level": "raid1", 00:18:36.082 "superblock": true, 00:18:36.082 "num_base_bdevs": 2, 00:18:36.082 "num_base_bdevs_discovered": 2, 00:18:36.082 "num_base_bdevs_operational": 2, 00:18:36.082 "base_bdevs_list": [ 00:18:36.082 { 00:18:36.082 "name": "BaseBdev1", 00:18:36.082 "uuid": "3de0a394-2fbf-402f-aa63-02a94e91f8d5", 00:18:36.082 "is_configured": true, 00:18:36.082 "data_offset": 256, 00:18:36.082 "data_size": 7936 00:18:36.082 }, 00:18:36.082 { 00:18:36.082 "name": "BaseBdev2", 00:18:36.082 "uuid": "1d1023c9-0b4d-4a5a-96a6-b9e2c2dc947a", 00:18:36.082 "is_configured": true, 00:18:36.082 "data_offset": 256, 00:18:36.082 "data_size": 7936 00:18:36.082 } 00:18:36.082 ] 00:18:36.082 } 00:18:36.082 } 00:18:36.082 }' 00:18:36.082 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:36.082 10:46:57 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:36.082 BaseBdev2' 00:18:36.082 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:36.082 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:36.082 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:36.082 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:36.082 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:36.082 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.082 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.082 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.340 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:36.340 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:36.340 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:36.340 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:36.340 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:18:36.340 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.340 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.340 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.340 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:36.340 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:36.340 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:36.340 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.340 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.340 [2024-11-15 10:46:57.317401] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:36.340 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.340 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:36.340 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:36.340 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:36.340 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:36.340 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:36.340 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid 
online raid1 0 1 00:18:36.340 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:36.340 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:36.340 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:36.340 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:36.340 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:36.340 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:36.340 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:36.340 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:36.340 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:36.340 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:36.340 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.340 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.340 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.340 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.340 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:36.340 "name": "Existed_Raid", 00:18:36.340 "uuid": 
"7fe156fc-8cf8-44f9-bdc5-bb4621164008", 00:18:36.340 "strip_size_kb": 0, 00:18:36.340 "state": "online", 00:18:36.340 "raid_level": "raid1", 00:18:36.340 "superblock": true, 00:18:36.340 "num_base_bdevs": 2, 00:18:36.340 "num_base_bdevs_discovered": 1, 00:18:36.340 "num_base_bdevs_operational": 1, 00:18:36.340 "base_bdevs_list": [ 00:18:36.340 { 00:18:36.340 "name": null, 00:18:36.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.340 "is_configured": false, 00:18:36.340 "data_offset": 0, 00:18:36.340 "data_size": 7936 00:18:36.340 }, 00:18:36.340 { 00:18:36.340 "name": "BaseBdev2", 00:18:36.340 "uuid": "1d1023c9-0b4d-4a5a-96a6-b9e2c2dc947a", 00:18:36.340 "is_configured": true, 00:18:36.340 "data_offset": 256, 00:18:36.340 "data_size": 7936 00:18:36.340 } 00:18:36.340 ] 00:18:36.340 }' 00:18:36.340 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:36.340 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.906 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:36.906 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:36.906 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:36.906 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.906 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.906 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.906 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.906 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 
-- # raid_bdev=Existed_Raid 00:18:36.906 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:36.906 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:36.906 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.906 10:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.906 [2024-11-15 10:46:57.993443] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:36.906 [2024-11-15 10:46:57.993739] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:37.164 [2024-11-15 10:46:58.085302] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:37.164 [2024-11-15 10:46:58.085586] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:37.164 [2024-11-15 10:46:58.085749] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:37.164 10:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.164 10:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:37.164 10:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:37.164 10:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.164 10:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.164 10:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.164 10:46:58 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:37.164 10:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.164 10:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:37.164 10:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:37.164 10:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:37.164 10:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87585 00:18:37.164 10:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87585 ']' 00:18:37.164 10:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87585 00:18:37.164 10:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:37.164 10:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:37.164 10:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87585 00:18:37.164 10:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:37.164 10:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:37.164 killing process with pid 87585 00:18:37.164 10:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87585' 00:18:37.164 10:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87585 00:18:37.164 [2024-11-15 10:46:58.177483] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:37.165 10:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87585 00:18:37.165 [2024-11-15 10:46:58.192928] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:38.099 ************************************ 00:18:38.099 END TEST raid_state_function_test_sb_md_separate 00:18:38.099 ************************************ 00:18:38.099 10:46:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:18:38.099 00:18:38.099 real 0m5.520s 00:18:38.099 user 0m8.397s 00:18:38.099 sys 0m0.754s 00:18:38.099 10:46:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:38.099 10:46:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.357 10:46:59 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:18:38.357 10:46:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:38.357 10:46:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:38.357 10:46:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:38.357 ************************************ 00:18:38.357 START TEST raid_superblock_test_md_separate 00:18:38.357 ************************************ 00:18:38.357 10:46:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:18:38.357 10:46:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:38.357 10:46:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:38.357 10:46:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:38.357 10:46:59 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:38.357 10:46:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:38.357 10:46:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:38.357 10:46:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:38.357 10:46:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:38.357 10:46:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:38.357 10:46:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:38.357 10:46:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:38.357 10:46:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:38.357 10:46:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:38.357 10:46:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:38.357 10:46:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:38.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:38.357 10:46:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87843 00:18:38.357 10:46:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87843 00:18:38.357 10:46:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:38.357 10:46:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87843 ']' 00:18:38.357 10:46:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.357 10:46:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:38.357 10:46:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.357 10:46:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:38.357 10:46:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.357 [2024-11-15 10:46:59.360605] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:18:38.358 [2024-11-15 10:46:59.361070] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87843 ] 00:18:38.616 [2024-11-15 10:46:59.537169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.616 [2024-11-15 10:46:59.666374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.873 [2024-11-15 10:46:59.863138] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:38.874 [2024-11-15 10:46:59.863207] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:39.437 10:47:00 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.437 malloc1 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.437 [2024-11-15 10:47:00.454006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:39.437 [2024-11-15 10:47:00.454251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.437 [2024-11-15 10:47:00.454330] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:39.437 [2024-11-15 10:47:00.454560] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.437 [2024-11-15 10:47:00.457193] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.437 [2024-11-15 10:47:00.457387] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:39.437 pt1 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:39.437 
10:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.437 malloc2 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.437 [2024-11-15 10:47:00.510265] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:39.437 [2024-11-15 10:47:00.510350] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.437 [2024-11-15 10:47:00.510381] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000007e80 00:18:39.437 [2024-11-15 10:47:00.510395] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.437 [2024-11-15 10:47:00.513070] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.437 [2024-11-15 10:47:00.513114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:39.437 pt2 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.437 [2024-11-15 10:47:00.522280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:39.437 [2024-11-15 10:47:00.524816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:39.437 [2024-11-15 10:47:00.525180] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:39.437 [2024-11-15 10:47:00.525209] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:39.437 [2024-11-15 10:47:00.525313] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:39.437 [2024-11-15 10:47:00.525476] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:39.437 [2024-11-15 10:47:00.525567] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:39.437 [2024-11-15 10:47:00.525717] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.437 10:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.437 "name": "raid_bdev1", 00:18:39.437 "uuid": "ff41d81d-efa1-46f8-a1a9-15c99963d5f8", 00:18:39.437 "strip_size_kb": 0, 00:18:39.437 "state": "online", 00:18:39.437 "raid_level": "raid1", 00:18:39.437 "superblock": true, 00:18:39.437 "num_base_bdevs": 2, 00:18:39.437 "num_base_bdevs_discovered": 2, 00:18:39.437 "num_base_bdevs_operational": 2, 00:18:39.437 "base_bdevs_list": [ 00:18:39.437 { 00:18:39.437 "name": "pt1", 00:18:39.437 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:39.437 "is_configured": true, 00:18:39.437 "data_offset": 256, 00:18:39.437 "data_size": 7936 00:18:39.437 }, 00:18:39.437 { 00:18:39.438 "name": "pt2", 00:18:39.438 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:39.438 "is_configured": true, 00:18:39.438 "data_offset": 256, 00:18:39.438 "data_size": 7936 00:18:39.438 } 00:18:39.438 ] 00:18:39.438 }' 00:18:39.438 10:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.438 10:47:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.002 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:40.002 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:40.002 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:40.002 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:40.002 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:40.002 10:47:01 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:40.002 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:40.002 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.002 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.002 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:40.002 [2024-11-15 10:47:01.042777] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:40.002 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.002 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:40.002 "name": "raid_bdev1", 00:18:40.002 "aliases": [ 00:18:40.002 "ff41d81d-efa1-46f8-a1a9-15c99963d5f8" 00:18:40.002 ], 00:18:40.002 "product_name": "Raid Volume", 00:18:40.002 "block_size": 4096, 00:18:40.002 "num_blocks": 7936, 00:18:40.002 "uuid": "ff41d81d-efa1-46f8-a1a9-15c99963d5f8", 00:18:40.002 "md_size": 32, 00:18:40.002 "md_interleave": false, 00:18:40.002 "dif_type": 0, 00:18:40.002 "assigned_rate_limits": { 00:18:40.002 "rw_ios_per_sec": 0, 00:18:40.002 "rw_mbytes_per_sec": 0, 00:18:40.002 "r_mbytes_per_sec": 0, 00:18:40.002 "w_mbytes_per_sec": 0 00:18:40.002 }, 00:18:40.002 "claimed": false, 00:18:40.002 "zoned": false, 00:18:40.002 "supported_io_types": { 00:18:40.002 "read": true, 00:18:40.002 "write": true, 00:18:40.002 "unmap": false, 00:18:40.002 "flush": false, 00:18:40.002 "reset": true, 00:18:40.002 "nvme_admin": false, 00:18:40.002 "nvme_io": false, 00:18:40.002 "nvme_io_md": false, 00:18:40.002 "write_zeroes": true, 00:18:40.002 "zcopy": false, 00:18:40.002 "get_zone_info": false, 00:18:40.002 "zone_management": false, 00:18:40.002 "zone_append": false, 00:18:40.002 "compare": 
false, 00:18:40.002 "compare_and_write": false, 00:18:40.002 "abort": false, 00:18:40.002 "seek_hole": false, 00:18:40.002 "seek_data": false, 00:18:40.002 "copy": false, 00:18:40.002 "nvme_iov_md": false 00:18:40.002 }, 00:18:40.002 "memory_domains": [ 00:18:40.002 { 00:18:40.002 "dma_device_id": "system", 00:18:40.002 "dma_device_type": 1 00:18:40.002 }, 00:18:40.002 { 00:18:40.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:40.002 "dma_device_type": 2 00:18:40.002 }, 00:18:40.002 { 00:18:40.002 "dma_device_id": "system", 00:18:40.002 "dma_device_type": 1 00:18:40.002 }, 00:18:40.002 { 00:18:40.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:40.002 "dma_device_type": 2 00:18:40.002 } 00:18:40.002 ], 00:18:40.002 "driver_specific": { 00:18:40.002 "raid": { 00:18:40.002 "uuid": "ff41d81d-efa1-46f8-a1a9-15c99963d5f8", 00:18:40.002 "strip_size_kb": 0, 00:18:40.002 "state": "online", 00:18:40.002 "raid_level": "raid1", 00:18:40.002 "superblock": true, 00:18:40.002 "num_base_bdevs": 2, 00:18:40.002 "num_base_bdevs_discovered": 2, 00:18:40.002 "num_base_bdevs_operational": 2, 00:18:40.002 "base_bdevs_list": [ 00:18:40.002 { 00:18:40.002 "name": "pt1", 00:18:40.002 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:40.002 "is_configured": true, 00:18:40.002 "data_offset": 256, 00:18:40.002 "data_size": 7936 00:18:40.002 }, 00:18:40.002 { 00:18:40.002 "name": "pt2", 00:18:40.002 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:40.002 "is_configured": true, 00:18:40.002 "data_offset": 256, 00:18:40.002 "data_size": 7936 00:18:40.002 } 00:18:40.002 ] 00:18:40.002 } 00:18:40.002 } 00:18:40.002 }' 00:18:40.002 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:40.002 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:40.002 pt2' 00:18:40.002 10:47:01 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:40.259 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:40.259 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:40.259 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:40.259 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.259 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.259 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:40.259 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.259 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:40.259 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:40.259 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:40.259 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:40.259 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.259 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.259 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:40.259 10:47:01 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.259 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:40.259 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:40.259 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:40.259 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:40.259 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.259 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.259 [2024-11-15 10:47:01.302782] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:40.259 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.259 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ff41d81d-efa1-46f8-a1a9-15c99963d5f8 00:18:40.259 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z ff41d81d-efa1-46f8-a1a9-15c99963d5f8 ']' 00:18:40.259 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:40.259 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.259 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.259 [2024-11-15 10:47:01.346396] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:40.259 [2024-11-15 10:47:01.346426] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:40.259 
[2024-11-15 10:47:01.346585] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:40.259 [2024-11-15 10:47:01.346664] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:40.259 [2024-11-15 10:47:01.346684] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:40.259 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.259 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:40.259 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.259 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.259 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.259 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.259 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:40.259 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:40.259 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:40.259 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:40.259 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.259 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.259 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.259 10:47:01 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:40.259 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:40.259 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.260 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.517 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.517 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:40.517 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.517 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:40.517 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.517 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.517 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:40.517 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:40.518 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:18:40.518 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:40.518 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:40.518 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:18:40.518 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:40.518 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:40.518 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:40.518 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.518 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.518 [2024-11-15 10:47:01.478452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:40.518 [2024-11-15 10:47:01.481304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:40.518 [2024-11-15 10:47:01.481587] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:40.518 [2024-11-15 10:47:01.481814] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:40.518 [2024-11-15 10:47:01.482055] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:40.518 [2024-11-15 10:47:01.482106] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:40.518 request: 00:18:40.518 { 00:18:40.518 "name": "raid_bdev1", 00:18:40.518 "raid_level": "raid1", 00:18:40.518 "base_bdevs": [ 00:18:40.518 "malloc1", 00:18:40.518 "malloc2" 00:18:40.518 ], 00:18:40.518 "superblock": false, 00:18:40.518 "method": "bdev_raid_create", 00:18:40.518 "req_id": 1 00:18:40.518 } 00:18:40.518 Got JSON-RPC error response 00:18:40.518 response: 00:18:40.518 { 00:18:40.518 "code": -17, 00:18:40.518 "message": "Failed to create RAID bdev raid_bdev1: 
File exists" 00:18:40.518 } 00:18:40.518 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:40.518 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:18:40.518 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:40.518 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:40.518 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:40.518 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.518 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.518 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:40.518 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.518 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.518 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:40.518 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:40.518 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:40.518 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.518 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.518 [2024-11-15 10:47:01.546473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:40.518 [2024-11-15 10:47:01.546579] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.518 [2024-11-15 10:47:01.546607] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:40.518 [2024-11-15 10:47:01.546625] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.518 [2024-11-15 10:47:01.549245] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.518 [2024-11-15 10:47:01.549454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:40.518 [2024-11-15 10:47:01.549536] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:40.518 [2024-11-15 10:47:01.549612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:40.518 pt1 00:18:40.518 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.518 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:40.518 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:40.518 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:40.518 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:40.518 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:40.518 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:40.518 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.518 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.518 10:47:01 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.518 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.518 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.518 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.518 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.518 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.518 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.518 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.518 "name": "raid_bdev1", 00:18:40.518 "uuid": "ff41d81d-efa1-46f8-a1a9-15c99963d5f8", 00:18:40.518 "strip_size_kb": 0, 00:18:40.518 "state": "configuring", 00:18:40.518 "raid_level": "raid1", 00:18:40.518 "superblock": true, 00:18:40.518 "num_base_bdevs": 2, 00:18:40.518 "num_base_bdevs_discovered": 1, 00:18:40.518 "num_base_bdevs_operational": 2, 00:18:40.518 "base_bdevs_list": [ 00:18:40.518 { 00:18:40.518 "name": "pt1", 00:18:40.518 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:40.518 "is_configured": true, 00:18:40.518 "data_offset": 256, 00:18:40.518 "data_size": 7936 00:18:40.518 }, 00:18:40.518 { 00:18:40.518 "name": null, 00:18:40.518 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:40.518 "is_configured": false, 00:18:40.518 "data_offset": 256, 00:18:40.518 "data_size": 7936 00:18:40.518 } 00:18:40.518 ] 00:18:40.518 }' 00:18:40.518 10:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.518 10:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.084 10:47:02 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:41.084 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:41.084 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:41.084 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:41.084 10:47:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.084 10:47:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.084 [2024-11-15 10:47:02.062663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:41.084 [2024-11-15 10:47:02.062759] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:41.084 [2024-11-15 10:47:02.062790] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:41.084 [2024-11-15 10:47:02.062809] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:41.084 [2024-11-15 10:47:02.063102] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:41.084 [2024-11-15 10:47:02.063131] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:41.084 [2024-11-15 10:47:02.063197] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:41.084 [2024-11-15 10:47:02.063231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:41.084 [2024-11-15 10:47:02.063367] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:41.084 [2024-11-15 10:47:02.063388] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:41.084 [2024-11-15 10:47:02.063474] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:41.084 [2024-11-15 10:47:02.063652] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:41.084 [2024-11-15 10:47:02.063669] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:41.084 [2024-11-15 10:47:02.063789] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:41.084 pt2 00:18:41.084 10:47:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.084 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:41.085 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:41.085 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:41.085 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:41.085 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:41.085 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:41.085 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:41.085 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:41.085 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.085 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.085 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.085 10:47:02 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.085 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.085 10:47:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.085 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.085 10:47:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.085 10:47:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.085 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.085 "name": "raid_bdev1", 00:18:41.085 "uuid": "ff41d81d-efa1-46f8-a1a9-15c99963d5f8", 00:18:41.085 "strip_size_kb": 0, 00:18:41.085 "state": "online", 00:18:41.085 "raid_level": "raid1", 00:18:41.085 "superblock": true, 00:18:41.085 "num_base_bdevs": 2, 00:18:41.085 "num_base_bdevs_discovered": 2, 00:18:41.085 "num_base_bdevs_operational": 2, 00:18:41.085 "base_bdevs_list": [ 00:18:41.085 { 00:18:41.085 "name": "pt1", 00:18:41.085 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:41.085 "is_configured": true, 00:18:41.085 "data_offset": 256, 00:18:41.085 "data_size": 7936 00:18:41.085 }, 00:18:41.085 { 00:18:41.085 "name": "pt2", 00:18:41.085 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:41.085 "is_configured": true, 00:18:41.085 "data_offset": 256, 00:18:41.085 "data_size": 7936 00:18:41.085 } 00:18:41.085 ] 00:18:41.085 }' 00:18:41.085 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.085 10:47:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.683 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # 
verify_raid_bdev_properties raid_bdev1 00:18:41.683 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:41.683 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:41.683 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:41.683 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:41.683 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:41.683 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:41.683 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:41.683 10:47:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.683 10:47:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.683 [2024-11-15 10:47:02.599152] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:41.683 10:47:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.683 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:41.683 "name": "raid_bdev1", 00:18:41.683 "aliases": [ 00:18:41.683 "ff41d81d-efa1-46f8-a1a9-15c99963d5f8" 00:18:41.683 ], 00:18:41.683 "product_name": "Raid Volume", 00:18:41.683 "block_size": 4096, 00:18:41.683 "num_blocks": 7936, 00:18:41.683 "uuid": "ff41d81d-efa1-46f8-a1a9-15c99963d5f8", 00:18:41.683 "md_size": 32, 00:18:41.683 "md_interleave": false, 00:18:41.683 "dif_type": 0, 00:18:41.683 "assigned_rate_limits": { 00:18:41.683 "rw_ios_per_sec": 0, 00:18:41.683 "rw_mbytes_per_sec": 0, 00:18:41.683 "r_mbytes_per_sec": 0, 00:18:41.683 
"w_mbytes_per_sec": 0 00:18:41.683 }, 00:18:41.683 "claimed": false, 00:18:41.683 "zoned": false, 00:18:41.683 "supported_io_types": { 00:18:41.683 "read": true, 00:18:41.683 "write": true, 00:18:41.683 "unmap": false, 00:18:41.683 "flush": false, 00:18:41.683 "reset": true, 00:18:41.683 "nvme_admin": false, 00:18:41.683 "nvme_io": false, 00:18:41.683 "nvme_io_md": false, 00:18:41.683 "write_zeroes": true, 00:18:41.683 "zcopy": false, 00:18:41.683 "get_zone_info": false, 00:18:41.683 "zone_management": false, 00:18:41.683 "zone_append": false, 00:18:41.683 "compare": false, 00:18:41.683 "compare_and_write": false, 00:18:41.683 "abort": false, 00:18:41.683 "seek_hole": false, 00:18:41.683 "seek_data": false, 00:18:41.683 "copy": false, 00:18:41.683 "nvme_iov_md": false 00:18:41.683 }, 00:18:41.683 "memory_domains": [ 00:18:41.683 { 00:18:41.684 "dma_device_id": "system", 00:18:41.684 "dma_device_type": 1 00:18:41.684 }, 00:18:41.684 { 00:18:41.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:41.684 "dma_device_type": 2 00:18:41.684 }, 00:18:41.684 { 00:18:41.684 "dma_device_id": "system", 00:18:41.684 "dma_device_type": 1 00:18:41.684 }, 00:18:41.684 { 00:18:41.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:41.684 "dma_device_type": 2 00:18:41.684 } 00:18:41.684 ], 00:18:41.684 "driver_specific": { 00:18:41.684 "raid": { 00:18:41.684 "uuid": "ff41d81d-efa1-46f8-a1a9-15c99963d5f8", 00:18:41.684 "strip_size_kb": 0, 00:18:41.684 "state": "online", 00:18:41.684 "raid_level": "raid1", 00:18:41.684 "superblock": true, 00:18:41.684 "num_base_bdevs": 2, 00:18:41.684 "num_base_bdevs_discovered": 2, 00:18:41.684 "num_base_bdevs_operational": 2, 00:18:41.684 "base_bdevs_list": [ 00:18:41.684 { 00:18:41.684 "name": "pt1", 00:18:41.684 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:41.684 "is_configured": true, 00:18:41.684 "data_offset": 256, 00:18:41.684 "data_size": 7936 00:18:41.684 }, 00:18:41.684 { 00:18:41.684 "name": "pt2", 00:18:41.684 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:18:41.684 "is_configured": true, 00:18:41.684 "data_offset": 256, 00:18:41.684 "data_size": 7936 00:18:41.684 } 00:18:41.684 ] 00:18:41.684 } 00:18:41.684 } 00:18:41.684 }' 00:18:41.684 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:41.684 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:41.684 pt2' 00:18:41.684 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:41.684 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:41.684 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:41.684 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:41.684 10:47:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.684 10:47:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.684 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:41.684 10:47:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.684 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:41.684 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:41.684 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:18:41.684 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:41.684 10:47:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.684 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:41.684 10:47:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.684 10:47:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.943 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:41.943 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:41.943 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:41.943 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:41.943 10:47:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.943 10:47:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.943 [2024-11-15 10:47:02.863227] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:41.943 10:47:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.943 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' ff41d81d-efa1-46f8-a1a9-15c99963d5f8 '!=' ff41d81d-efa1-46f8-a1a9-15c99963d5f8 ']' 00:18:41.943 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:41.943 10:47:02 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@198 -- # case $1 in 00:18:41.943 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:41.943 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:41.943 10:47:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.943 10:47:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.943 [2024-11-15 10:47:02.906955] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:41.943 10:47:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.943 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:41.943 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:41.943 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:41.943 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:41.943 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:41.943 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:41.943 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.943 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.943 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.943 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.943 10:47:02 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.943 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.943 10:47:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.943 10:47:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.943 10:47:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.943 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.943 "name": "raid_bdev1", 00:18:41.943 "uuid": "ff41d81d-efa1-46f8-a1a9-15c99963d5f8", 00:18:41.943 "strip_size_kb": 0, 00:18:41.943 "state": "online", 00:18:41.943 "raid_level": "raid1", 00:18:41.943 "superblock": true, 00:18:41.943 "num_base_bdevs": 2, 00:18:41.943 "num_base_bdevs_discovered": 1, 00:18:41.943 "num_base_bdevs_operational": 1, 00:18:41.943 "base_bdevs_list": [ 00:18:41.943 { 00:18:41.943 "name": null, 00:18:41.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.943 "is_configured": false, 00:18:41.943 "data_offset": 0, 00:18:41.943 "data_size": 7936 00:18:41.943 }, 00:18:41.943 { 00:18:41.943 "name": "pt2", 00:18:41.943 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:41.943 "is_configured": true, 00:18:41.943 "data_offset": 256, 00:18:41.943 "data_size": 7936 00:18:41.943 } 00:18:41.943 ] 00:18:41.943 }' 00:18:41.943 10:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.943 10:47:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.510 10:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:42.510 10:47:03 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.510 10:47:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.510 [2024-11-15 10:47:03.407043] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:42.510 [2024-11-15 10:47:03.407076] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:42.510 [2024-11-15 10:47:03.407167] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:42.510 [2024-11-15 10:47:03.407230] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:42.510 [2024-11-15 10:47:03.407248] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:42.510 10:47:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.510 10:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.510 10:47:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.510 10:47:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.510 10:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:42.510 10:47:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.510 10:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:42.510 10:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:42.511 10:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:42.511 10:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:42.511 10:47:03 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:42.511 10:47:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.511 10:47:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.511 10:47:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.511 10:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:42.511 10:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:42.511 10:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:42.511 10:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:42.511 10:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:18:42.511 10:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:42.511 10:47:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.511 10:47:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.511 [2024-11-15 10:47:03.499065] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:42.511 [2024-11-15 10:47:03.499150] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:42.511 [2024-11-15 10:47:03.499176] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:42.511 [2024-11-15 10:47:03.499192] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:42.511 [2024-11-15 10:47:03.501935] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:18:42.511 [2024-11-15 10:47:03.502168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:42.511 [2024-11-15 10:47:03.502246] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:42.511 [2024-11-15 10:47:03.502311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:42.511 [2024-11-15 10:47:03.502429] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:42.511 [2024-11-15 10:47:03.502451] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:42.511 [2024-11-15 10:47:03.502596] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:42.511 [2024-11-15 10:47:03.502742] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:42.511 [2024-11-15 10:47:03.502756] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:42.511 [2024-11-15 10:47:03.502916] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:42.511 pt2 00:18:42.511 10:47:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.511 10:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:42.511 10:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:42.511 10:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:42.511 10:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:42.511 10:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:42.511 10:47:03 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:42.511 10:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:42.511 10:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:42.511 10:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:42.511 10:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:42.511 10:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.511 10:47:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.511 10:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.511 10:47:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.511 10:47:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.511 10:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:42.511 "name": "raid_bdev1", 00:18:42.511 "uuid": "ff41d81d-efa1-46f8-a1a9-15c99963d5f8", 00:18:42.511 "strip_size_kb": 0, 00:18:42.511 "state": "online", 00:18:42.511 "raid_level": "raid1", 00:18:42.511 "superblock": true, 00:18:42.511 "num_base_bdevs": 2, 00:18:42.511 "num_base_bdevs_discovered": 1, 00:18:42.511 "num_base_bdevs_operational": 1, 00:18:42.511 "base_bdevs_list": [ 00:18:42.511 { 00:18:42.511 "name": null, 00:18:42.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.511 "is_configured": false, 00:18:42.511 "data_offset": 256, 00:18:42.511 "data_size": 7936 00:18:42.511 }, 00:18:42.511 { 00:18:42.511 "name": "pt2", 00:18:42.511 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:42.511 "is_configured": true, 
00:18:42.511 "data_offset": 256, 00:18:42.511 "data_size": 7936 00:18:42.511 } 00:18:42.511 ] 00:18:42.511 }' 00:18:42.511 10:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.511 10:47:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.077 10:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:43.077 10:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.077 10:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.077 [2024-11-15 10:47:04.007233] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:43.077 [2024-11-15 10:47:04.007271] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:43.077 [2024-11-15 10:47:04.007360] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:43.077 [2024-11-15 10:47:04.007426] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:43.077 [2024-11-15 10:47:04.007441] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:43.077 10:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.077 10:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.077 10:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.077 10:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.077 10:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:43.077 10:47:04 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.077 10:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:43.077 10:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:43.077 10:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:43.077 10:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:43.077 10:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.077 10:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.077 [2024-11-15 10:47:04.063280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:43.077 [2024-11-15 10:47:04.063343] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:43.077 [2024-11-15 10:47:04.063373] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:43.077 [2024-11-15 10:47:04.063387] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:43.077 [2024-11-15 10:47:04.066142] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:43.077 [2024-11-15 10:47:04.066187] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:43.077 [2024-11-15 10:47:04.066256] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:43.077 [2024-11-15 10:47:04.066310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:43.077 [2024-11-15 10:47:04.066463] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:43.077 
[2024-11-15 10:47:04.066480] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:43.077 [2024-11-15 10:47:04.066551] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:43.077 [2024-11-15 10:47:04.066628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:43.077 [2024-11-15 10:47:04.066727] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:43.077 [2024-11-15 10:47:04.066743] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:43.077 [2024-11-15 10:47:04.066834] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:43.077 [2024-11-15 10:47:04.066971] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:43.077 [2024-11-15 10:47:04.066990] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:43.077 [2024-11-15 10:47:04.067115] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:43.077 pt1 00:18:43.077 10:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.077 10:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:43.077 10:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:43.077 10:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:43.077 10:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:43.077 10:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:43.077 10:47:04 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:43.077 10:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:43.077 10:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.077 10:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.077 10:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.077 10:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.077 10:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.077 10:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.077 10:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.077 10:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.077 10:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.077 10:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.077 "name": "raid_bdev1", 00:18:43.077 "uuid": "ff41d81d-efa1-46f8-a1a9-15c99963d5f8", 00:18:43.077 "strip_size_kb": 0, 00:18:43.077 "state": "online", 00:18:43.077 "raid_level": "raid1", 00:18:43.077 "superblock": true, 00:18:43.077 "num_base_bdevs": 2, 00:18:43.077 "num_base_bdevs_discovered": 1, 00:18:43.077 "num_base_bdevs_operational": 1, 00:18:43.077 "base_bdevs_list": [ 00:18:43.077 { 00:18:43.077 "name": null, 00:18:43.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.077 "is_configured": false, 00:18:43.077 "data_offset": 256, 00:18:43.077 "data_size": 7936 00:18:43.077 }, 00:18:43.077 { 00:18:43.077 
"name": "pt2", 00:18:43.077 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:43.077 "is_configured": true, 00:18:43.077 "data_offset": 256, 00:18:43.077 "data_size": 7936 00:18:43.077 } 00:18:43.077 ] 00:18:43.077 }' 00:18:43.077 10:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.077 10:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.643 10:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:43.643 10:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:43.643 10:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.643 10:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.643 10:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.643 10:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:43.643 10:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:43.643 10:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:43.643 10:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.643 10:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.643 [2024-11-15 10:47:04.627742] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:43.643 10:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.643 10:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 
ff41d81d-efa1-46f8-a1a9-15c99963d5f8 '!=' ff41d81d-efa1-46f8-a1a9-15c99963d5f8 ']' 00:18:43.643 10:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87843 00:18:43.643 10:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87843 ']' 00:18:43.643 10:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87843 00:18:43.643 10:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:43.643 10:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:43.643 10:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87843 00:18:43.643 killing process with pid 87843 00:18:43.643 10:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:43.643 10:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:43.643 10:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87843' 00:18:43.643 10:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87843 00:18:43.643 [2024-11-15 10:47:04.708884] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:43.643 [2024-11-15 10:47:04.708983] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:43.643 10:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87843 00:18:43.643 [2024-11-15 10:47:04.709045] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:43.643 [2024-11-15 10:47:04.709070] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 
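The killprocess sequence traced above guards the kill: `kill -0` confirms pid 87843 is still alive, `ps --no-headers -o comm=` checks the command name, and only then is the process signalled and waited on. A minimal re-creation of that pattern (the `demo_killprocess` name and the background `sleep` target are illustrative stand-ins, not part of the test suite):

```shell
# Sketch of the killprocess guard seen in the trace: probe liveness with
# kill -0, look up the command name the same way the trace does, then
# terminate and reap the child.
demo_killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1       # process already gone
    local comm
    comm=$(ps --no-headers -o comm= "$pid")      # same lookup as the trace
    echo "killing process with pid $pid ($comm)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true              # mirrors the trailing 'wait'
}

sleep 60 &
pid=$!
demo_killprocess "$pid"
! kill -0 "$pid" 2>/dev/null && echo "process $pid gone"
```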
00:18:43.901 [2024-11-15 10:47:04.904601] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:44.835 10:47:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:18:44.835 00:18:44.835 real 0m6.623s 00:18:44.835 user 0m10.540s 00:18:44.835 sys 0m0.935s 00:18:44.835 ************************************ 00:18:44.835 END TEST raid_superblock_test_md_separate 00:18:44.835 ************************************ 00:18:44.835 10:47:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:44.835 10:47:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.835 10:47:05 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:18:44.835 10:47:05 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:18:44.835 10:47:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:44.835 10:47:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:44.835 10:47:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:44.835 ************************************ 00:18:44.835 START TEST raid_rebuild_test_sb_md_separate 00:18:44.835 ************************************ 00:18:44.835 10:47:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:18:44.835 10:47:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:44.835 10:47:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:44.835 10:47:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:44.835 10:47:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:44.835 10:47:05 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:44.835 10:47:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:44.835 10:47:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:44.835 10:47:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:44.835 10:47:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:44.835 10:47:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:44.835 10:47:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:44.835 10:47:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:44.835 10:47:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:44.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
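The `(( i = 1 )) … (( i <= num_base_bdevs )) … echo BaseBdev$i` loop traced above expands into the `base_bdevs=('BaseBdev1' 'BaseBdev2')` array that appears next in the trace. A minimal re-creation of that expansion for `num_base_bdevs=2` (the trace builds the array via command substitution around the loop; this sketch collects the names directly):

```shell
# Rebuild the base_bdevs array the way the traced loop names it:
# BaseBdev1 .. BaseBdevN for num_base_bdevs=N.
num_base_bdevs=2
base_bdevs=()
for ((i = 1; i <= num_base_bdevs; i++)); do
    base_bdevs+=("BaseBdev$i")
done
echo "${base_bdevs[@]}"   # BaseBdev1 BaseBdev2
```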
00:18:44.835 10:47:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:44.835 10:47:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:44.835 10:47:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:44.835 10:47:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:44.835 10:47:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:44.835 10:47:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:44.835 10:47:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:44.835 10:47:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:44.835 10:47:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:44.835 10:47:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:44.835 10:47:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:44.835 10:47:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88166 00:18:44.835 10:47:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:44.835 10:47:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88166 00:18:44.835 10:47:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88166 ']' 00:18:44.835 10:47:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.835 10:47:05 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:44.835 10:47:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:44.835 10:47:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:44.835 10:47:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:45.093 [2024-11-15 10:47:06.072038] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:18:45.093 [2024-11-15 10:47:06.072435] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88166 ] 00:18:45.093 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:45.093 Zero copy mechanism will not be used. 
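The notice above ("I/O size of 3145728 is greater than zero copy threshold (65536)") follows directly from the `-o 3M` option on the bdevperf command line: 3M is 3145728 bytes, which exceeds the 65536-byte threshold, so zero copy is disabled. The arithmetic:

```shell
# '-o 3M' in bytes versus the 65536-byte zero copy threshold quoted
# in the notice above.
io_size=$((3 * 1024 * 1024))
threshold=65536
echo "$io_size"   # 3145728
[ "$io_size" -gt "$threshold" ] && echo "zero copy disabled"
```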
00:18:45.093 [2024-11-15 10:47:06.250485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.352 [2024-11-15 10:47:06.380082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.609 [2024-11-15 10:47:06.587174] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:45.609 [2024-11-15 10:47:06.587466] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:45.867 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:45.867 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:45.867 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:45.867 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:18:45.867 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.867 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.125 BaseBdev1_malloc 00:18:46.125 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.125 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:46.125 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.125 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.126 [2024-11-15 10:47:07.074137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:46.126 [2024-11-15 10:47:07.074226] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:46.126 [2024-11-15 10:47:07.074257] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:46.126 [2024-11-15 10:47:07.074275] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:46.126 [2024-11-15 10:47:07.077322] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:46.126 [2024-11-15 10:47:07.077370] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:46.126 BaseBdev1 00:18:46.126 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.126 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:46.126 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:18:46.126 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.126 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.126 BaseBdev2_malloc 00:18:46.126 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.126 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:46.126 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.126 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.126 [2024-11-15 10:47:07.133895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:46.126 [2024-11-15 10:47:07.133987] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:46.126 [2024-11-15 10:47:07.134031] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:18:46.126 [2024-11-15 10:47:07.134051] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:46.126 [2024-11-15 10:47:07.136684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:46.126 [2024-11-15 10:47:07.136747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:46.126 BaseBdev2 00:18:46.126 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.126 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:18:46.126 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.126 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.126 spare_malloc 00:18:46.126 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.126 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:46.126 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.126 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.126 spare_delay 00:18:46.126 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.126 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:46.126 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.126 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.126 [2024-11-15 
10:47:07.205132] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:46.126 [2024-11-15 10:47:07.205218] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:46.126 [2024-11-15 10:47:07.205249] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:46.126 [2024-11-15 10:47:07.205267] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:46.126 [2024-11-15 10:47:07.208037] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:46.126 [2024-11-15 10:47:07.208100] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:46.126 spare 00:18:46.126 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.126 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:46.126 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.126 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.126 [2024-11-15 10:47:07.217210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:46.126 [2024-11-15 10:47:07.219949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:46.126 [2024-11-15 10:47:07.220368] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:46.126 [2024-11-15 10:47:07.220523] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:46.126 [2024-11-15 10:47:07.220692] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:46.126 [2024-11-15 10:47:07.220921] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 
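The configure messages above report `blockcnt 7936, blocklen 4096` for the newly created raid_bdev1; since the array is raid1 (mirroring), usable capacity equals a single base bdev. In bytes, under those two values from the trace:

```shell
# Capacity of raid_bdev1 from the traced configure parameters:
# raid1 mirrors, so capacity = blockcnt * blocklen of one base bdev.
blockcnt=7936
blocklen=4096
echo $((blockcnt * blocklen))   # 32505856 (~31 MiB)
```

This matches the 32505856-byte write that the later `dd` over /dev/nbd0 reports.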
00:18:46.126 [2024-11-15 10:47:07.220939] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:46.126 [2024-11-15 10:47:07.221132] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:46.126 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.126 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:46.126 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:46.126 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:46.126 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:46.126 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:46.126 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:46.126 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.126 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.126 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.126 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.126 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.126 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.126 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:46.126 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.126 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.126 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.126 "name": "raid_bdev1", 00:18:46.126 "uuid": "ac90cae8-e052-4218-ae24-d00fa46947e7", 00:18:46.126 "strip_size_kb": 0, 00:18:46.126 "state": "online", 00:18:46.126 "raid_level": "raid1", 00:18:46.126 "superblock": true, 00:18:46.126 "num_base_bdevs": 2, 00:18:46.126 "num_base_bdevs_discovered": 2, 00:18:46.126 "num_base_bdevs_operational": 2, 00:18:46.126 "base_bdevs_list": [ 00:18:46.126 { 00:18:46.126 "name": "BaseBdev1", 00:18:46.126 "uuid": "d7e84a24-5c10-56e5-9028-0cb1c5621c9e", 00:18:46.126 "is_configured": true, 00:18:46.126 "data_offset": 256, 00:18:46.126 "data_size": 7936 00:18:46.126 }, 00:18:46.126 { 00:18:46.126 "name": "BaseBdev2", 00:18:46.126 "uuid": "6541b59e-a43d-5922-9d18-98939c7e37d0", 00:18:46.126 "is_configured": true, 00:18:46.126 "data_offset": 256, 00:18:46.126 "data_size": 7936 00:18:46.126 } 00:18:46.126 ] 00:18:46.126 }' 00:18:46.126 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.126 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.693 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:46.693 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:46.693 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.693 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.693 [2024-11-15 10:47:07.733798] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:18:46.693 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.693 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:46.693 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:46.693 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.693 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.693 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.693 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.693 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:46.693 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:46.693 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:46.693 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:46.693 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:46.693 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:46.693 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:46.693 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:46.693 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:46.693 10:47:07 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:46.693 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:46.693 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:46.693 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:46.693 10:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:46.951 [2024-11-15 10:47:08.089637] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:46.951 /dev/nbd0 00:18:47.209 10:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:47.209 10:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:47.209 10:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:47.209 10:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:47.209 10:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:47.209 10:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:47.209 10:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:47.209 10:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:47.209 10:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:47.209 10:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:47.209 10:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:47.209 1+0 records in 00:18:47.209 1+0 records out 00:18:47.209 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000583617 s, 7.0 MB/s 00:18:47.209 10:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:47.209 10:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:47.209 10:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:47.209 10:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:47.209 10:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:47.209 10:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:47.209 10:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:47.209 10:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:47.209 10:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:47.209 10:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:18:48.143 7936+0 records in 00:18:48.143 7936+0 records out 00:18:48.143 32505856 bytes (33 MB, 31 MiB) copied, 0.922295 s, 35.2 MB/s 00:18:48.143 10:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:48.143 10:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:48.143 10:47:09 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:48.143 10:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:48.143 10:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:48.143 10:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:48.143 10:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:48.400 10:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:48.400 10:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:48.400 10:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:48.400 10:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:48.400 10:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:48.400 10:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:48.400 [2024-11-15 10:47:09.386252] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:48.400 10:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:48.400 10:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:48.400 10:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:48.400 10:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.400 10:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.400 [2024-11-15 10:47:09.393963] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:48.400 10:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.400 10:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:48.400 10:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:48.400 10:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:48.400 10:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:48.400 10:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:48.400 10:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:48.400 10:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:48.400 10:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:48.400 10:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:48.400 10:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:48.400 10:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.401 10:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.401 10:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.401 10:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.401 10:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:18:48.401 10:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:48.401 "name": "raid_bdev1", 00:18:48.401 "uuid": "ac90cae8-e052-4218-ae24-d00fa46947e7", 00:18:48.401 "strip_size_kb": 0, 00:18:48.401 "state": "online", 00:18:48.401 "raid_level": "raid1", 00:18:48.401 "superblock": true, 00:18:48.401 "num_base_bdevs": 2, 00:18:48.401 "num_base_bdevs_discovered": 1, 00:18:48.401 "num_base_bdevs_operational": 1, 00:18:48.401 "base_bdevs_list": [ 00:18:48.401 { 00:18:48.401 "name": null, 00:18:48.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.401 "is_configured": false, 00:18:48.401 "data_offset": 0, 00:18:48.401 "data_size": 7936 00:18:48.401 }, 00:18:48.401 { 00:18:48.401 "name": "BaseBdev2", 00:18:48.401 "uuid": "6541b59e-a43d-5922-9d18-98939c7e37d0", 00:18:48.401 "is_configured": true, 00:18:48.401 "data_offset": 256, 00:18:48.401 "data_size": 7936 00:18:48.401 } 00:18:48.401 ] 00:18:48.401 }' 00:18:48.401 10:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:48.401 10:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.967 10:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:48.967 10:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.967 10:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.967 [2024-11-15 10:47:09.906219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:48.967 [2024-11-15 10:47:09.921294] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:18:48.967 10:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.967 10:47:09 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:48.967 [2024-11-15 10:47:09.923918] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:49.901 10:47:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:49.901 10:47:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:49.901 10:47:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:49.901 10:47:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:49.901 10:47:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:49.901 10:47:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.901 10:47:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.901 10:47:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.901 10:47:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.901 10:47:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.901 10:47:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:49.901 "name": "raid_bdev1", 00:18:49.901 "uuid": "ac90cae8-e052-4218-ae24-d00fa46947e7", 00:18:49.901 "strip_size_kb": 0, 00:18:49.901 "state": "online", 00:18:49.901 "raid_level": "raid1", 00:18:49.901 "superblock": true, 00:18:49.901 "num_base_bdevs": 2, 00:18:49.901 "num_base_bdevs_discovered": 2, 00:18:49.901 "num_base_bdevs_operational": 2, 00:18:49.901 "process": { 00:18:49.901 "type": "rebuild", 00:18:49.901 
"target": "spare", 00:18:49.901 "progress": { 00:18:49.901 "blocks": 2560, 00:18:49.901 "percent": 32 00:18:49.901 } 00:18:49.901 }, 00:18:49.901 "base_bdevs_list": [ 00:18:49.901 { 00:18:49.901 "name": "spare", 00:18:49.901 "uuid": "1a8e89bc-d857-5985-9f2e-b66a99b6205a", 00:18:49.901 "is_configured": true, 00:18:49.901 "data_offset": 256, 00:18:49.901 "data_size": 7936 00:18:49.901 }, 00:18:49.901 { 00:18:49.901 "name": "BaseBdev2", 00:18:49.901 "uuid": "6541b59e-a43d-5922-9d18-98939c7e37d0", 00:18:49.901 "is_configured": true, 00:18:49.901 "data_offset": 256, 00:18:49.901 "data_size": 7936 00:18:49.901 } 00:18:49.901 ] 00:18:49.901 }' 00:18:49.901 10:47:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:49.901 10:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:49.901 10:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:50.161 10:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:50.161 10:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:50.161 10:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.161 10:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:50.161 [2024-11-15 10:47:11.093399] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:50.161 [2024-11-15 10:47:11.133276] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:50.161 [2024-11-15 10:47:11.133393] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:50.161 [2024-11-15 10:47:11.133418] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:50.161 
[2024-11-15 10:47:11.133434] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:50.161 10:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.161 10:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:50.161 10:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:50.161 10:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:50.161 10:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:50.161 10:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:50.161 10:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:50.161 10:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:50.161 10:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:50.161 10:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:50.161 10:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:50.161 10:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.161 10:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.161 10:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.161 10:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:50.161 10:47:11 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.161 10:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:50.161 "name": "raid_bdev1", 00:18:50.161 "uuid": "ac90cae8-e052-4218-ae24-d00fa46947e7", 00:18:50.161 "strip_size_kb": 0, 00:18:50.161 "state": "online", 00:18:50.161 "raid_level": "raid1", 00:18:50.161 "superblock": true, 00:18:50.161 "num_base_bdevs": 2, 00:18:50.161 "num_base_bdevs_discovered": 1, 00:18:50.161 "num_base_bdevs_operational": 1, 00:18:50.161 "base_bdevs_list": [ 00:18:50.161 { 00:18:50.161 "name": null, 00:18:50.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.161 "is_configured": false, 00:18:50.161 "data_offset": 0, 00:18:50.161 "data_size": 7936 00:18:50.161 }, 00:18:50.161 { 00:18:50.161 "name": "BaseBdev2", 00:18:50.161 "uuid": "6541b59e-a43d-5922-9d18-98939c7e37d0", 00:18:50.161 "is_configured": true, 00:18:50.161 "data_offset": 256, 00:18:50.161 "data_size": 7936 00:18:50.161 } 00:18:50.161 ] 00:18:50.161 }' 00:18:50.161 10:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:50.161 10:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:50.728 10:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:50.728 10:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:50.728 10:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:50.728 10:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:50.728 10:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:50.728 10:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:18:50.728 10:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.728 10:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.728 10:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:50.728 10:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.728 10:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:50.728 "name": "raid_bdev1", 00:18:50.728 "uuid": "ac90cae8-e052-4218-ae24-d00fa46947e7", 00:18:50.728 "strip_size_kb": 0, 00:18:50.728 "state": "online", 00:18:50.728 "raid_level": "raid1", 00:18:50.728 "superblock": true, 00:18:50.728 "num_base_bdevs": 2, 00:18:50.728 "num_base_bdevs_discovered": 1, 00:18:50.728 "num_base_bdevs_operational": 1, 00:18:50.728 "base_bdevs_list": [ 00:18:50.728 { 00:18:50.728 "name": null, 00:18:50.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.728 "is_configured": false, 00:18:50.728 "data_offset": 0, 00:18:50.728 "data_size": 7936 00:18:50.728 }, 00:18:50.728 { 00:18:50.728 "name": "BaseBdev2", 00:18:50.728 "uuid": "6541b59e-a43d-5922-9d18-98939c7e37d0", 00:18:50.728 "is_configured": true, 00:18:50.728 "data_offset": 256, 00:18:50.728 "data_size": 7936 00:18:50.728 } 00:18:50.728 ] 00:18:50.728 }' 00:18:50.728 10:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:50.728 10:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:50.728 10:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:50.728 10:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:50.728 
10:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:50.728 10:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.728 10:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:50.728 [2024-11-15 10:47:11.869547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:50.728 [2024-11-15 10:47:11.883441] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:18:50.728 10:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.728 10:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:50.728 [2024-11-15 10:47:11.885907] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:52.108 10:47:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:52.108 10:47:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:52.108 10:47:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:52.108 10:47:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:52.108 10:47:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:52.108 10:47:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.108 10:47:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.108 10:47:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.109 10:47:12 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.109 10:47:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.109 10:47:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:52.109 "name": "raid_bdev1", 00:18:52.109 "uuid": "ac90cae8-e052-4218-ae24-d00fa46947e7", 00:18:52.109 "strip_size_kb": 0, 00:18:52.109 "state": "online", 00:18:52.109 "raid_level": "raid1", 00:18:52.109 "superblock": true, 00:18:52.109 "num_base_bdevs": 2, 00:18:52.109 "num_base_bdevs_discovered": 2, 00:18:52.109 "num_base_bdevs_operational": 2, 00:18:52.109 "process": { 00:18:52.109 "type": "rebuild", 00:18:52.109 "target": "spare", 00:18:52.109 "progress": { 00:18:52.109 "blocks": 2560, 00:18:52.109 "percent": 32 00:18:52.109 } 00:18:52.109 }, 00:18:52.109 "base_bdevs_list": [ 00:18:52.109 { 00:18:52.109 "name": "spare", 00:18:52.109 "uuid": "1a8e89bc-d857-5985-9f2e-b66a99b6205a", 00:18:52.109 "is_configured": true, 00:18:52.109 "data_offset": 256, 00:18:52.109 "data_size": 7936 00:18:52.109 }, 00:18:52.109 { 00:18:52.109 "name": "BaseBdev2", 00:18:52.109 "uuid": "6541b59e-a43d-5922-9d18-98939c7e37d0", 00:18:52.109 "is_configured": true, 00:18:52.109 "data_offset": 256, 00:18:52.109 "data_size": 7936 00:18:52.109 } 00:18:52.109 ] 00:18:52.109 }' 00:18:52.109 10:47:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:52.109 10:47:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:52.109 10:47:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:52.109 10:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:52.109 10:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = 
true ']' 00:18:52.109 10:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:52.109 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:52.109 10:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:52.109 10:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:52.109 10:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:52.109 10:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=762 00:18:52.109 10:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:52.109 10:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:52.109 10:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:52.109 10:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:52.109 10:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:52.109 10:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:52.109 10:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.109 10:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.109 10:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.109 10:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.109 10:47:13 bdev_raid.raid_rebuild_test_sb_md_separate 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.109 10:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:52.109 "name": "raid_bdev1", 00:18:52.109 "uuid": "ac90cae8-e052-4218-ae24-d00fa46947e7", 00:18:52.109 "strip_size_kb": 0, 00:18:52.109 "state": "online", 00:18:52.109 "raid_level": "raid1", 00:18:52.109 "superblock": true, 00:18:52.109 "num_base_bdevs": 2, 00:18:52.109 "num_base_bdevs_discovered": 2, 00:18:52.109 "num_base_bdevs_operational": 2, 00:18:52.109 "process": { 00:18:52.109 "type": "rebuild", 00:18:52.109 "target": "spare", 00:18:52.109 "progress": { 00:18:52.109 "blocks": 2816, 00:18:52.109 "percent": 35 00:18:52.109 } 00:18:52.109 }, 00:18:52.109 "base_bdevs_list": [ 00:18:52.109 { 00:18:52.109 "name": "spare", 00:18:52.109 "uuid": "1a8e89bc-d857-5985-9f2e-b66a99b6205a", 00:18:52.109 "is_configured": true, 00:18:52.109 "data_offset": 256, 00:18:52.109 "data_size": 7936 00:18:52.109 }, 00:18:52.109 { 00:18:52.109 "name": "BaseBdev2", 00:18:52.109 "uuid": "6541b59e-a43d-5922-9d18-98939c7e37d0", 00:18:52.109 "is_configured": true, 00:18:52.109 "data_offset": 256, 00:18:52.109 "data_size": 7936 00:18:52.109 } 00:18:52.109 ] 00:18:52.109 }' 00:18:52.109 10:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:52.109 10:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:52.109 10:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:52.109 10:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:52.109 10:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:53.481 10:47:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:53.481 10:47:14 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:53.481 10:47:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:53.481 10:47:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:53.481 10:47:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:53.481 10:47:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:53.481 10:47:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.481 10:47:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.481 10:47:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.481 10:47:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.481 10:47:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.482 10:47:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:53.482 "name": "raid_bdev1", 00:18:53.482 "uuid": "ac90cae8-e052-4218-ae24-d00fa46947e7", 00:18:53.482 "strip_size_kb": 0, 00:18:53.482 "state": "online", 00:18:53.482 "raid_level": "raid1", 00:18:53.482 "superblock": true, 00:18:53.482 "num_base_bdevs": 2, 00:18:53.482 "num_base_bdevs_discovered": 2, 00:18:53.482 "num_base_bdevs_operational": 2, 00:18:53.482 "process": { 00:18:53.482 "type": "rebuild", 00:18:53.482 "target": "spare", 00:18:53.482 "progress": { 00:18:53.482 "blocks": 5888, 00:18:53.482 "percent": 74 00:18:53.482 } 00:18:53.482 }, 00:18:53.482 "base_bdevs_list": [ 00:18:53.482 { 00:18:53.482 "name": "spare", 00:18:53.482 "uuid": 
"1a8e89bc-d857-5985-9f2e-b66a99b6205a", 00:18:53.482 "is_configured": true, 00:18:53.482 "data_offset": 256, 00:18:53.482 "data_size": 7936 00:18:53.482 }, 00:18:53.482 { 00:18:53.482 "name": "BaseBdev2", 00:18:53.482 "uuid": "6541b59e-a43d-5922-9d18-98939c7e37d0", 00:18:53.482 "is_configured": true, 00:18:53.482 "data_offset": 256, 00:18:53.482 "data_size": 7936 00:18:53.482 } 00:18:53.482 ] 00:18:53.482 }' 00:18:53.482 10:47:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:53.482 10:47:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:53.482 10:47:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:53.482 10:47:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:53.482 10:47:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:54.046 [2024-11-15 10:47:15.009207] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:54.046 [2024-11-15 10:47:15.009307] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:54.046 [2024-11-15 10:47:15.009472] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:54.304 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:54.304 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:54.304 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:54.304 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:54.304 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:18:54.304 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:54.304 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.304 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.304 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.304 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.304 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.304 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:54.304 "name": "raid_bdev1", 00:18:54.304 "uuid": "ac90cae8-e052-4218-ae24-d00fa46947e7", 00:18:54.304 "strip_size_kb": 0, 00:18:54.304 "state": "online", 00:18:54.304 "raid_level": "raid1", 00:18:54.304 "superblock": true, 00:18:54.304 "num_base_bdevs": 2, 00:18:54.304 "num_base_bdevs_discovered": 2, 00:18:54.304 "num_base_bdevs_operational": 2, 00:18:54.304 "base_bdevs_list": [ 00:18:54.304 { 00:18:54.304 "name": "spare", 00:18:54.304 "uuid": "1a8e89bc-d857-5985-9f2e-b66a99b6205a", 00:18:54.304 "is_configured": true, 00:18:54.304 "data_offset": 256, 00:18:54.304 "data_size": 7936 00:18:54.304 }, 00:18:54.304 { 00:18:54.304 "name": "BaseBdev2", 00:18:54.304 "uuid": "6541b59e-a43d-5922-9d18-98939c7e37d0", 00:18:54.304 "is_configured": true, 00:18:54.304 "data_offset": 256, 00:18:54.304 "data_size": 7936 00:18:54.304 } 00:18:54.304 ] 00:18:54.304 }' 00:18:54.304 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:54.563 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d 
]] 00:18:54.563 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:54.563 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:54.563 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:18:54.563 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:54.563 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:54.563 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:54.563 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:54.563 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:54.563 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.563 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.563 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.563 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.563 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.563 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:54.563 "name": "raid_bdev1", 00:18:54.563 "uuid": "ac90cae8-e052-4218-ae24-d00fa46947e7", 00:18:54.563 "strip_size_kb": 0, 00:18:54.563 "state": "online", 00:18:54.563 "raid_level": "raid1", 00:18:54.563 "superblock": true, 00:18:54.563 "num_base_bdevs": 2, 00:18:54.563 "num_base_bdevs_discovered": 2, 
00:18:54.563 "num_base_bdevs_operational": 2, 00:18:54.563 "base_bdevs_list": [ 00:18:54.563 { 00:18:54.563 "name": "spare", 00:18:54.563 "uuid": "1a8e89bc-d857-5985-9f2e-b66a99b6205a", 00:18:54.563 "is_configured": true, 00:18:54.563 "data_offset": 256, 00:18:54.563 "data_size": 7936 00:18:54.563 }, 00:18:54.563 { 00:18:54.563 "name": "BaseBdev2", 00:18:54.563 "uuid": "6541b59e-a43d-5922-9d18-98939c7e37d0", 00:18:54.563 "is_configured": true, 00:18:54.563 "data_offset": 256, 00:18:54.563 "data_size": 7936 00:18:54.563 } 00:18:54.563 ] 00:18:54.563 }' 00:18:54.563 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:54.563 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:54.563 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:54.563 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:54.563 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:54.563 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:54.563 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:54.563 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:54.563 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:54.563 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:54.563 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.563 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.563 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.563 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.563 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.563 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.563 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.563 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.563 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.821 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.821 "name": "raid_bdev1", 00:18:54.821 "uuid": "ac90cae8-e052-4218-ae24-d00fa46947e7", 00:18:54.821 "strip_size_kb": 0, 00:18:54.821 "state": "online", 00:18:54.821 "raid_level": "raid1", 00:18:54.821 "superblock": true, 00:18:54.821 "num_base_bdevs": 2, 00:18:54.821 "num_base_bdevs_discovered": 2, 00:18:54.821 "num_base_bdevs_operational": 2, 00:18:54.821 "base_bdevs_list": [ 00:18:54.821 { 00:18:54.821 "name": "spare", 00:18:54.821 "uuid": "1a8e89bc-d857-5985-9f2e-b66a99b6205a", 00:18:54.821 "is_configured": true, 00:18:54.821 "data_offset": 256, 00:18:54.821 "data_size": 7936 00:18:54.821 }, 00:18:54.821 { 00:18:54.821 "name": "BaseBdev2", 00:18:54.821 "uuid": "6541b59e-a43d-5922-9d18-98939c7e37d0", 00:18:54.821 "is_configured": true, 00:18:54.821 "data_offset": 256, 00:18:54.821 "data_size": 7936 00:18:54.821 } 00:18:54.821 ] 00:18:54.821 }' 00:18:54.821 10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.821 
10:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.079 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:55.080 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.080 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.080 [2024-11-15 10:47:16.192658] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:55.080 [2024-11-15 10:47:16.192709] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:55.080 [2024-11-15 10:47:16.192840] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:55.080 [2024-11-15 10:47:16.192935] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:55.080 [2024-11-15 10:47:16.192953] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:55.080 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.080 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.080 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:18:55.080 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.080 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.080 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.338 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:55.338 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:55.338 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:55.338 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:55.338 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:55.338 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:55.338 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:55.338 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:55.338 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:55.338 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:55.338 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:55.338 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:55.338 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:55.596 /dev/nbd0 00:18:55.596 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:55.596 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:55.596 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:55.596 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:55.596 10:47:16 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:55.596 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:55.596 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:55.596 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:55.596 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:55.596 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:55.596 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:55.596 1+0 records in 00:18:55.596 1+0 records out 00:18:55.596 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00093245 s, 4.4 MB/s 00:18:55.596 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:55.596 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:55.596 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:55.596 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:55.596 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:55.596 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:55.596 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:55.596 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:55.854 /dev/nbd1 00:18:55.854 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:55.854 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:55.854 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:55.854 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:55.854 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:55.854 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:55.854 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:55.854 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:55.854 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:55.854 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:55.854 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:55.854 1+0 records in 00:18:55.854 1+0 records out 00:18:55.854 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00250474 s, 1.6 MB/s 00:18:55.854 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:55.854 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:55.854 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:55.854 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:55.854 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:55.854 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:55.854 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:55.854 10:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:56.113 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:56.113 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:56.113 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:56.113 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:56.113 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:56.113 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:56.113 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:56.371 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:56.371 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:56.371 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:56.371 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i 
= 1 )) 00:18:56.371 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:56.371 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:56.371 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:56.371 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:56.371 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:56.371 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:56.630 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:56.630 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:56.630 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:56.630 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:56.630 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:56.630 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:56.630 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:56.630 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:56.630 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:56.630 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:56.630 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:56.630 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:56.630 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.630 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:56.630 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.630 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:56.630 [2024-11-15 10:47:17.705932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:56.630 [2024-11-15 10:47:17.706008] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:56.630 [2024-11-15 10:47:17.706056] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:56.630 [2024-11-15 10:47:17.706072] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:56.630 [2024-11-15 10:47:17.708715] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:56.630 [2024-11-15 10:47:17.708771] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:56.630 [2024-11-15 10:47:17.708868] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:56.630 [2024-11-15 10:47:17.708942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:56.630 [2024-11-15 10:47:17.709113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:56.630 spare 00:18:56.630 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.630 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:56.630 
10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.630 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:56.889 [2024-11-15 10:47:17.809224] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:56.889 [2024-11-15 10:47:17.809446] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:56.889 [2024-11-15 10:47:17.809623] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:56.889 [2024-11-15 10:47:17.809805] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:56.889 [2024-11-15 10:47:17.809821] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:56.889 [2024-11-15 10:47:17.810003] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:56.889 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.889 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:56.889 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:56.889 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:56.889 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:56.889 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:56.889 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:56.889 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:56.889 
10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:56.889 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:56.889 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:56.889 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.889 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.889 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:56.889 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.889 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.889 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:56.889 "name": "raid_bdev1", 00:18:56.889 "uuid": "ac90cae8-e052-4218-ae24-d00fa46947e7", 00:18:56.889 "strip_size_kb": 0, 00:18:56.889 "state": "online", 00:18:56.889 "raid_level": "raid1", 00:18:56.889 "superblock": true, 00:18:56.889 "num_base_bdevs": 2, 00:18:56.889 "num_base_bdevs_discovered": 2, 00:18:56.889 "num_base_bdevs_operational": 2, 00:18:56.889 "base_bdevs_list": [ 00:18:56.889 { 00:18:56.889 "name": "spare", 00:18:56.889 "uuid": "1a8e89bc-d857-5985-9f2e-b66a99b6205a", 00:18:56.889 "is_configured": true, 00:18:56.889 "data_offset": 256, 00:18:56.889 "data_size": 7936 00:18:56.889 }, 00:18:56.889 { 00:18:56.889 "name": "BaseBdev2", 00:18:56.889 "uuid": "6541b59e-a43d-5922-9d18-98939c7e37d0", 00:18:56.889 "is_configured": true, 00:18:56.889 "data_offset": 256, 00:18:56.889 "data_size": 7936 00:18:56.889 } 00:18:56.889 ] 00:18:56.889 }' 00:18:56.889 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:56.889 10:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:57.456 10:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:57.456 10:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:57.456 10:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:57.456 10:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:57.456 10:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:57.456 10:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.456 10:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.456 10:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.456 10:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:57.456 10:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.456 10:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:57.456 "name": "raid_bdev1", 00:18:57.456 "uuid": "ac90cae8-e052-4218-ae24-d00fa46947e7", 00:18:57.456 "strip_size_kb": 0, 00:18:57.456 "state": "online", 00:18:57.456 "raid_level": "raid1", 00:18:57.456 "superblock": true, 00:18:57.456 "num_base_bdevs": 2, 00:18:57.456 "num_base_bdevs_discovered": 2, 00:18:57.456 "num_base_bdevs_operational": 2, 00:18:57.456 "base_bdevs_list": [ 00:18:57.456 { 00:18:57.456 "name": "spare", 00:18:57.456 "uuid": "1a8e89bc-d857-5985-9f2e-b66a99b6205a", 00:18:57.456 
"is_configured": true, 00:18:57.456 "data_offset": 256, 00:18:57.456 "data_size": 7936 00:18:57.456 }, 00:18:57.456 { 00:18:57.456 "name": "BaseBdev2", 00:18:57.456 "uuid": "6541b59e-a43d-5922-9d18-98939c7e37d0", 00:18:57.456 "is_configured": true, 00:18:57.456 "data_offset": 256, 00:18:57.456 "data_size": 7936 00:18:57.456 } 00:18:57.456 ] 00:18:57.456 }' 00:18:57.456 10:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:57.456 10:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:57.456 10:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:57.456 10:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:57.456 10:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.456 10:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.456 10:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:57.456 10:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:57.456 10:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.456 10:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:57.456 10:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:57.456 10:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.456 10:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:57.456 [2024-11-15 10:47:18.538265] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:57.456 10:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.456 10:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:57.456 10:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:57.456 10:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:57.456 10:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:57.456 10:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:57.456 10:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:57.456 10:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.456 10:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.456 10:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.456 10:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.456 10:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.456 10:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.456 10:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.456 10:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:57.456 10:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:18:57.456 10:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:57.456 "name": "raid_bdev1", 00:18:57.456 "uuid": "ac90cae8-e052-4218-ae24-d00fa46947e7", 00:18:57.456 "strip_size_kb": 0, 00:18:57.456 "state": "online", 00:18:57.456 "raid_level": "raid1", 00:18:57.456 "superblock": true, 00:18:57.456 "num_base_bdevs": 2, 00:18:57.456 "num_base_bdevs_discovered": 1, 00:18:57.456 "num_base_bdevs_operational": 1, 00:18:57.456 "base_bdevs_list": [ 00:18:57.456 { 00:18:57.456 "name": null, 00:18:57.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.456 "is_configured": false, 00:18:57.456 "data_offset": 0, 00:18:57.456 "data_size": 7936 00:18:57.456 }, 00:18:57.456 { 00:18:57.456 "name": "BaseBdev2", 00:18:57.456 "uuid": "6541b59e-a43d-5922-9d18-98939c7e37d0", 00:18:57.456 "is_configured": true, 00:18:57.456 "data_offset": 256, 00:18:57.456 "data_size": 7936 00:18:57.456 } 00:18:57.456 ] 00:18:57.456 }' 00:18:57.456 10:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:57.456 10:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:58.081 10:47:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:58.081 10:47:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.081 10:47:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:58.081 [2024-11-15 10:47:19.030437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:58.081 [2024-11-15 10:47:19.030707] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:58.081 [2024-11-15 10:47:19.030735] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid 
bdev raid_bdev1. 00:18:58.081 [2024-11-15 10:47:19.030783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:58.081 [2024-11-15 10:47:19.043812] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:18:58.081 10:47:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.081 10:47:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:58.081 [2024-11-15 10:47:19.046359] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:59.016 10:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:59.016 10:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:59.016 10:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:59.016 10:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:59.016 10:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:59.016 10:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.016 10:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.016 10:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.016 10:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:59.016 10:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.016 10:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:59.016 "name": 
"raid_bdev1", 00:18:59.016 "uuid": "ac90cae8-e052-4218-ae24-d00fa46947e7", 00:18:59.016 "strip_size_kb": 0, 00:18:59.016 "state": "online", 00:18:59.016 "raid_level": "raid1", 00:18:59.016 "superblock": true, 00:18:59.016 "num_base_bdevs": 2, 00:18:59.016 "num_base_bdevs_discovered": 2, 00:18:59.016 "num_base_bdevs_operational": 2, 00:18:59.016 "process": { 00:18:59.016 "type": "rebuild", 00:18:59.016 "target": "spare", 00:18:59.016 "progress": { 00:18:59.016 "blocks": 2560, 00:18:59.016 "percent": 32 00:18:59.016 } 00:18:59.016 }, 00:18:59.016 "base_bdevs_list": [ 00:18:59.016 { 00:18:59.016 "name": "spare", 00:18:59.016 "uuid": "1a8e89bc-d857-5985-9f2e-b66a99b6205a", 00:18:59.016 "is_configured": true, 00:18:59.016 "data_offset": 256, 00:18:59.016 "data_size": 7936 00:18:59.016 }, 00:18:59.016 { 00:18:59.016 "name": "BaseBdev2", 00:18:59.016 "uuid": "6541b59e-a43d-5922-9d18-98939c7e37d0", 00:18:59.016 "is_configured": true, 00:18:59.016 "data_offset": 256, 00:18:59.016 "data_size": 7936 00:18:59.016 } 00:18:59.016 ] 00:18:59.016 }' 00:18:59.016 10:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:59.016 10:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:59.016 10:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:59.274 10:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:59.274 10:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:59.274 10:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.274 10:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:59.274 [2024-11-15 10:47:20.204347] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:18:59.274 [2024-11-15 10:47:20.255626] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:59.274 [2024-11-15 10:47:20.255849] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:59.274 [2024-11-15 10:47:20.255878] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:59.274 [2024-11-15 10:47:20.255906] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:59.274 10:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.274 10:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:59.274 10:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:59.274 10:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:59.274 10:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:59.274 10:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:59.274 10:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:59.274 10:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.274 10:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.274 10:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.274 10:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.274 10:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:59.274 10:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.274 10:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.274 10:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:59.274 10:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.274 10:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.274 "name": "raid_bdev1", 00:18:59.274 "uuid": "ac90cae8-e052-4218-ae24-d00fa46947e7", 00:18:59.274 "strip_size_kb": 0, 00:18:59.274 "state": "online", 00:18:59.274 "raid_level": "raid1", 00:18:59.274 "superblock": true, 00:18:59.274 "num_base_bdevs": 2, 00:18:59.274 "num_base_bdevs_discovered": 1, 00:18:59.274 "num_base_bdevs_operational": 1, 00:18:59.274 "base_bdevs_list": [ 00:18:59.274 { 00:18:59.274 "name": null, 00:18:59.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.274 "is_configured": false, 00:18:59.274 "data_offset": 0, 00:18:59.274 "data_size": 7936 00:18:59.274 }, 00:18:59.274 { 00:18:59.274 "name": "BaseBdev2", 00:18:59.274 "uuid": "6541b59e-a43d-5922-9d18-98939c7e37d0", 00:18:59.274 "is_configured": true, 00:18:59.274 "data_offset": 256, 00:18:59.274 "data_size": 7936 00:18:59.274 } 00:18:59.274 ] 00:18:59.274 }' 00:18:59.275 10:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.275 10:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:59.841 10:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:59.841 10:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.841 10:47:20 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:59.841 [2024-11-15 10:47:20.795101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:59.841 [2024-11-15 10:47:20.795198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:59.841 [2024-11-15 10:47:20.795249] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:59.841 [2024-11-15 10:47:20.795268] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:59.841 [2024-11-15 10:47:20.795750] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:59.841 [2024-11-15 10:47:20.795830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:59.841 [2024-11-15 10:47:20.796048] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:59.841 [2024-11-15 10:47:20.796115] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:59.841 [2024-11-15 10:47:20.796308] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:59.841 [2024-11-15 10:47:20.796365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:59.841 [2024-11-15 10:47:20.809834] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:59.841 spare 00:18:59.841 10:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.841 10:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:59.841 [2024-11-15 10:47:20.812247] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:00.775 10:47:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:00.775 10:47:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:00.775 10:47:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:00.775 10:47:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:00.775 10:47:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:00.775 10:47:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.775 10:47:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.775 10:47:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:00.775 10:47:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.775 10:47:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.775 10:47:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:00.775 "name": 
"raid_bdev1", 00:19:00.775 "uuid": "ac90cae8-e052-4218-ae24-d00fa46947e7", 00:19:00.775 "strip_size_kb": 0, 00:19:00.775 "state": "online", 00:19:00.775 "raid_level": "raid1", 00:19:00.775 "superblock": true, 00:19:00.775 "num_base_bdevs": 2, 00:19:00.775 "num_base_bdevs_discovered": 2, 00:19:00.775 "num_base_bdevs_operational": 2, 00:19:00.775 "process": { 00:19:00.775 "type": "rebuild", 00:19:00.775 "target": "spare", 00:19:00.775 "progress": { 00:19:00.775 "blocks": 2560, 00:19:00.775 "percent": 32 00:19:00.775 } 00:19:00.775 }, 00:19:00.775 "base_bdevs_list": [ 00:19:00.775 { 00:19:00.775 "name": "spare", 00:19:00.775 "uuid": "1a8e89bc-d857-5985-9f2e-b66a99b6205a", 00:19:00.775 "is_configured": true, 00:19:00.775 "data_offset": 256, 00:19:00.775 "data_size": 7936 00:19:00.775 }, 00:19:00.775 { 00:19:00.775 "name": "BaseBdev2", 00:19:00.775 "uuid": "6541b59e-a43d-5922-9d18-98939c7e37d0", 00:19:00.775 "is_configured": true, 00:19:00.775 "data_offset": 256, 00:19:00.775 "data_size": 7936 00:19:00.775 } 00:19:00.775 ] 00:19:00.775 }' 00:19:00.775 10:47:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:00.775 10:47:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:00.775 10:47:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:01.033 10:47:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:01.033 10:47:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:01.033 10:47:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.033 10:47:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:01.033 [2024-11-15 10:47:21.978352] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:19:01.033 [2024-11-15 10:47:22.021069] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:01.033 [2024-11-15 10:47:22.021310] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:01.033 [2024-11-15 10:47:22.021346] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:01.034 [2024-11-15 10:47:22.021368] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:01.034 10:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.034 10:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:01.034 10:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:01.034 10:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:01.034 10:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:01.034 10:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:01.034 10:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:01.034 10:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.034 10:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.034 10:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.034 10:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.034 10:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:01.034 10:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.034 10:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:01.034 10:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.034 10:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.034 10:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.034 "name": "raid_bdev1", 00:19:01.034 "uuid": "ac90cae8-e052-4218-ae24-d00fa46947e7", 00:19:01.034 "strip_size_kb": 0, 00:19:01.034 "state": "online", 00:19:01.034 "raid_level": "raid1", 00:19:01.034 "superblock": true, 00:19:01.034 "num_base_bdevs": 2, 00:19:01.034 "num_base_bdevs_discovered": 1, 00:19:01.034 "num_base_bdevs_operational": 1, 00:19:01.034 "base_bdevs_list": [ 00:19:01.034 { 00:19:01.034 "name": null, 00:19:01.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.034 "is_configured": false, 00:19:01.034 "data_offset": 0, 00:19:01.034 "data_size": 7936 00:19:01.034 }, 00:19:01.034 { 00:19:01.034 "name": "BaseBdev2", 00:19:01.034 "uuid": "6541b59e-a43d-5922-9d18-98939c7e37d0", 00:19:01.034 "is_configured": true, 00:19:01.034 "data_offset": 256, 00:19:01.034 "data_size": 7936 00:19:01.034 } 00:19:01.034 ] 00:19:01.034 }' 00:19:01.034 10:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.034 10:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:01.601 10:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:01.601 10:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:01.601 10:47:22 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:01.601 10:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:01.601 10:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:01.601 10:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.601 10:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.601 10:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.601 10:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:01.601 10:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.601 10:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:01.601 "name": "raid_bdev1", 00:19:01.601 "uuid": "ac90cae8-e052-4218-ae24-d00fa46947e7", 00:19:01.601 "strip_size_kb": 0, 00:19:01.601 "state": "online", 00:19:01.601 "raid_level": "raid1", 00:19:01.601 "superblock": true, 00:19:01.601 "num_base_bdevs": 2, 00:19:01.601 "num_base_bdevs_discovered": 1, 00:19:01.601 "num_base_bdevs_operational": 1, 00:19:01.601 "base_bdevs_list": [ 00:19:01.601 { 00:19:01.601 "name": null, 00:19:01.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.602 "is_configured": false, 00:19:01.602 "data_offset": 0, 00:19:01.602 "data_size": 7936 00:19:01.602 }, 00:19:01.602 { 00:19:01.602 "name": "BaseBdev2", 00:19:01.602 "uuid": "6541b59e-a43d-5922-9d18-98939c7e37d0", 00:19:01.602 "is_configured": true, 00:19:01.602 "data_offset": 256, 00:19:01.602 "data_size": 7936 00:19:01.602 } 00:19:01.602 ] 00:19:01.602 }' 00:19:01.602 10:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:01.602 10:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:01.602 10:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:01.602 10:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:01.602 10:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:01.602 10:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.602 10:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:01.602 10:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.602 10:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:01.602 10:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.602 10:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:01.602 [2024-11-15 10:47:22.708231] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:01.602 [2024-11-15 10:47:22.708311] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:01.602 [2024-11-15 10:47:22.708346] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:01.602 [2024-11-15 10:47:22.708361] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:01.602 [2024-11-15 10:47:22.708698] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:01.602 [2024-11-15 10:47:22.708729] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:19:01.602 [2024-11-15 10:47:22.708832] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:01.602 [2024-11-15 10:47:22.708861] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:01.602 [2024-11-15 10:47:22.708875] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:01.602 [2024-11-15 10:47:22.708889] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:01.602 BaseBdev1 00:19:01.602 10:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.602 10:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:02.979 10:47:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:02.979 10:47:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:02.979 10:47:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:02.979 10:47:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:02.979 10:47:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:02.979 10:47:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:02.979 10:47:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:02.979 10:47:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:02.979 10:47:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:19:02.979 10:47:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:02.979 10:47:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.979 10:47:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.979 10:47:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.979 10:47:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:02.979 10:47:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.979 10:47:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:02.979 "name": "raid_bdev1", 00:19:02.979 "uuid": "ac90cae8-e052-4218-ae24-d00fa46947e7", 00:19:02.979 "strip_size_kb": 0, 00:19:02.979 "state": "online", 00:19:02.979 "raid_level": "raid1", 00:19:02.979 "superblock": true, 00:19:02.979 "num_base_bdevs": 2, 00:19:02.979 "num_base_bdevs_discovered": 1, 00:19:02.979 "num_base_bdevs_operational": 1, 00:19:02.979 "base_bdevs_list": [ 00:19:02.979 { 00:19:02.979 "name": null, 00:19:02.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.979 "is_configured": false, 00:19:02.979 "data_offset": 0, 00:19:02.979 "data_size": 7936 00:19:02.979 }, 00:19:02.979 { 00:19:02.979 "name": "BaseBdev2", 00:19:02.979 "uuid": "6541b59e-a43d-5922-9d18-98939c7e37d0", 00:19:02.979 "is_configured": true, 00:19:02.979 "data_offset": 256, 00:19:02.979 "data_size": 7936 00:19:02.979 } 00:19:02.979 ] 00:19:02.979 }' 00:19:02.979 10:47:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:02.979 10:47:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:03.238 10:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:19:03.238 10:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:03.238 10:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:03.238 10:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:03.238 10:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:03.238 10:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.238 10:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.238 10:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.238 10:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:03.238 10:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.238 10:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:03.238 "name": "raid_bdev1", 00:19:03.238 "uuid": "ac90cae8-e052-4218-ae24-d00fa46947e7", 00:19:03.238 "strip_size_kb": 0, 00:19:03.238 "state": "online", 00:19:03.238 "raid_level": "raid1", 00:19:03.238 "superblock": true, 00:19:03.238 "num_base_bdevs": 2, 00:19:03.238 "num_base_bdevs_discovered": 1, 00:19:03.238 "num_base_bdevs_operational": 1, 00:19:03.238 "base_bdevs_list": [ 00:19:03.238 { 00:19:03.238 "name": null, 00:19:03.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.238 "is_configured": false, 00:19:03.238 "data_offset": 0, 00:19:03.238 "data_size": 7936 00:19:03.238 }, 00:19:03.238 { 00:19:03.238 "name": "BaseBdev2", 00:19:03.238 "uuid": "6541b59e-a43d-5922-9d18-98939c7e37d0", 00:19:03.238 "is_configured": 
true, 00:19:03.238 "data_offset": 256, 00:19:03.238 "data_size": 7936 00:19:03.238 } 00:19:03.238 ] 00:19:03.238 }' 00:19:03.238 10:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:03.238 10:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:03.238 10:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:03.496 10:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:03.497 10:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:03.497 10:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:19:03.497 10:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:03.497 10:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:03.497 10:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:03.497 10:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:03.497 10:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:03.497 10:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:03.497 10:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.497 10:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:03.497 [2024-11-15 10:47:24.412938] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:03.497 [2024-11-15 10:47:24.413188] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:03.497 [2024-11-15 10:47:24.413214] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:03.497 request: 00:19:03.497 { 00:19:03.497 "base_bdev": "BaseBdev1", 00:19:03.497 "raid_bdev": "raid_bdev1", 00:19:03.497 "method": "bdev_raid_add_base_bdev", 00:19:03.497 "req_id": 1 00:19:03.497 } 00:19:03.497 Got JSON-RPC error response 00:19:03.497 response: 00:19:03.497 { 00:19:03.497 "code": -22, 00:19:03.497 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:03.497 } 00:19:03.497 10:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:03.497 10:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:19:03.497 10:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:03.497 10:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:03.497 10:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:03.497 10:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:04.448 10:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:04.448 10:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:04.448 10:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:04.448 10:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:19:04.448 10:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:04.448 10:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:04.448 10:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:04.448 10:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:04.448 10:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:04.448 10:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:04.448 10:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.448 10:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.448 10:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.448 10:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:04.448 10:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.448 10:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:04.448 "name": "raid_bdev1", 00:19:04.448 "uuid": "ac90cae8-e052-4218-ae24-d00fa46947e7", 00:19:04.448 "strip_size_kb": 0, 00:19:04.448 "state": "online", 00:19:04.448 "raid_level": "raid1", 00:19:04.448 "superblock": true, 00:19:04.449 "num_base_bdevs": 2, 00:19:04.449 "num_base_bdevs_discovered": 1, 00:19:04.449 "num_base_bdevs_operational": 1, 00:19:04.449 "base_bdevs_list": [ 00:19:04.449 { 00:19:04.449 "name": null, 00:19:04.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.449 "is_configured": false, 00:19:04.449 
"data_offset": 0, 00:19:04.449 "data_size": 7936 00:19:04.449 }, 00:19:04.449 { 00:19:04.449 "name": "BaseBdev2", 00:19:04.449 "uuid": "6541b59e-a43d-5922-9d18-98939c7e37d0", 00:19:04.449 "is_configured": true, 00:19:04.449 "data_offset": 256, 00:19:04.449 "data_size": 7936 00:19:04.449 } 00:19:04.449 ] 00:19:04.449 }' 00:19:04.449 10:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:04.449 10:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:05.017 10:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:05.017 10:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:05.017 10:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:05.017 10:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:05.017 10:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:05.017 10:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.017 10:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.017 10:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:05.017 10:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.017 10:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.017 10:47:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:05.017 "name": "raid_bdev1", 00:19:05.017 "uuid": "ac90cae8-e052-4218-ae24-d00fa46947e7", 00:19:05.017 
"strip_size_kb": 0, 00:19:05.017 "state": "online", 00:19:05.017 "raid_level": "raid1", 00:19:05.017 "superblock": true, 00:19:05.017 "num_base_bdevs": 2, 00:19:05.017 "num_base_bdevs_discovered": 1, 00:19:05.017 "num_base_bdevs_operational": 1, 00:19:05.017 "base_bdevs_list": [ 00:19:05.017 { 00:19:05.017 "name": null, 00:19:05.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.017 "is_configured": false, 00:19:05.017 "data_offset": 0, 00:19:05.017 "data_size": 7936 00:19:05.017 }, 00:19:05.017 { 00:19:05.017 "name": "BaseBdev2", 00:19:05.017 "uuid": "6541b59e-a43d-5922-9d18-98939c7e37d0", 00:19:05.017 "is_configured": true, 00:19:05.017 "data_offset": 256, 00:19:05.017 "data_size": 7936 00:19:05.017 } 00:19:05.017 ] 00:19:05.017 }' 00:19:05.017 10:47:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:05.017 10:47:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:05.017 10:47:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:05.017 10:47:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:05.017 10:47:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88166 00:19:05.017 10:47:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88166 ']' 00:19:05.017 10:47:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 88166 00:19:05.017 10:47:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:19:05.017 10:47:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:05.017 10:47:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88166 00:19:05.017 killing process with 
pid 88166 00:19:05.017 Received shutdown signal, test time was about 60.000000 seconds 00:19:05.017 00:19:05.017 Latency(us) 00:19:05.017 [2024-11-15T10:47:26.179Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.017 [2024-11-15T10:47:26.179Z] =================================================================================================================== 00:19:05.017 [2024-11-15T10:47:26.179Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:05.017 10:47:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:05.017 10:47:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:05.017 10:47:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88166' 00:19:05.017 10:47:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 88166 00:19:05.017 [2024-11-15 10:47:26.167822] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:05.017 10:47:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 88166 00:19:05.017 [2024-11-15 10:47:26.168018] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:05.017 [2024-11-15 10:47:26.168085] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:05.017 [2024-11-15 10:47:26.168105] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:05.585 [2024-11-15 10:47:26.460308] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:06.522 10:47:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:19:06.522 00:19:06.522 real 0m21.554s 00:19:06.522 user 0m29.236s 00:19:06.522 sys 0m2.443s 00:19:06.522 10:47:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:06.522 ************************************ 00:19:06.522 END TEST raid_rebuild_test_sb_md_separate 00:19:06.522 ************************************ 00:19:06.522 10:47:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:06.522 10:47:27 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:19:06.522 10:47:27 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:19:06.522 10:47:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:06.522 10:47:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:06.522 10:47:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:06.522 ************************************ 00:19:06.522 START TEST raid_state_function_test_sb_md_interleaved 00:19:06.522 ************************************ 00:19:06.522 10:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:19:06.522 10:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:06.522 10:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:06.522 10:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:06.522 10:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:06.522 10:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:06.522 10:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:06.522 10:47:27 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:06.522 10:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:06.522 10:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:06.522 10:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:06.522 10:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:06.522 10:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:06.522 10:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:06.522 10:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:06.522 10:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:06.522 10:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:06.522 10:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:06.522 10:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:06.522 10:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:06.522 10:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:06.522 10:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:06.522 10:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:19:06.522 10:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88868 00:19:06.522 10:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:06.522 Process raid pid: 88868 00:19:06.522 10:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88868' 00:19:06.522 10:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88868 00:19:06.522 10:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88868 ']' 00:19:06.522 10:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:06.522 10:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:06.522 10:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.522 10:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:06.522 10:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.522 [2024-11-15 10:47:27.677922] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:19:06.522 [2024-11-15 10:47:27.678103] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:06.781 [2024-11-15 10:47:27.867048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.040 [2024-11-15 10:47:28.003760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.298 [2024-11-15 10:47:28.214817] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:07.298 [2024-11-15 10:47:28.214898] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:07.557 10:47:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:07.557 10:47:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:19:07.557 10:47:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:07.557 10:47:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.557 10:47:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.557 [2024-11-15 10:47:28.642553] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:07.557 [2024-11-15 10:47:28.642658] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:07.557 [2024-11-15 10:47:28.642687] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:07.557 [2024-11-15 10:47:28.642719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:07.557 10:47:28 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.557 10:47:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:07.557 10:47:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:07.557 10:47:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:07.557 10:47:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:07.557 10:47:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:07.557 10:47:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:07.557 10:47:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:07.557 10:47:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:07.557 10:47:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:07.557 10:47:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:07.557 10:47:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.557 10:47:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.557 10:47:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.557 10:47:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:07.557 10:47:28 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.557 10:47:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:07.557 "name": "Existed_Raid", 00:19:07.557 "uuid": "212770e3-011d-4a0c-b656-48759ffdc922", 00:19:07.557 "strip_size_kb": 0, 00:19:07.557 "state": "configuring", 00:19:07.557 "raid_level": "raid1", 00:19:07.557 "superblock": true, 00:19:07.557 "num_base_bdevs": 2, 00:19:07.557 "num_base_bdevs_discovered": 0, 00:19:07.557 "num_base_bdevs_operational": 2, 00:19:07.557 "base_bdevs_list": [ 00:19:07.557 { 00:19:07.557 "name": "BaseBdev1", 00:19:07.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.557 "is_configured": false, 00:19:07.557 "data_offset": 0, 00:19:07.557 "data_size": 0 00:19:07.557 }, 00:19:07.557 { 00:19:07.557 "name": "BaseBdev2", 00:19:07.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.557 "is_configured": false, 00:19:07.557 "data_offset": 0, 00:19:07.557 "data_size": 0 00:19:07.557 } 00:19:07.557 ] 00:19:07.557 }' 00:19:07.557 10:47:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:07.557 10:47:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.124 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:08.124 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.124 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.124 [2024-11-15 10:47:29.166731] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:08.124 [2024-11-15 10:47:29.166785] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:19:08.124 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.124 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:08.124 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.124 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.124 [2024-11-15 10:47:29.174730] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:08.124 [2024-11-15 10:47:29.174796] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:08.124 [2024-11-15 10:47:29.174823] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:08.124 [2024-11-15 10:47:29.174857] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:08.124 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.124 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:19:08.124 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.124 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.124 [2024-11-15 10:47:29.221279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:08.124 BaseBdev1 00:19:08.124 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.124 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:08.124 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:08.124 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:08.124 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:19:08.125 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:08.125 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:08.125 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:08.125 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.125 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.125 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.125 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:08.125 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.125 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.125 [ 00:19:08.125 { 00:19:08.125 "name": "BaseBdev1", 00:19:08.125 "aliases": [ 00:19:08.125 "452dc040-a297-49d2-8feb-0e221f9e7d87" 00:19:08.125 ], 00:19:08.125 "product_name": "Malloc disk", 00:19:08.125 "block_size": 4128, 00:19:08.125 "num_blocks": 8192, 00:19:08.125 "uuid": "452dc040-a297-49d2-8feb-0e221f9e7d87", 00:19:08.125 "md_size": 32, 00:19:08.125 
"md_interleave": true, 00:19:08.125 "dif_type": 0, 00:19:08.125 "assigned_rate_limits": { 00:19:08.125 "rw_ios_per_sec": 0, 00:19:08.125 "rw_mbytes_per_sec": 0, 00:19:08.125 "r_mbytes_per_sec": 0, 00:19:08.125 "w_mbytes_per_sec": 0 00:19:08.125 }, 00:19:08.125 "claimed": true, 00:19:08.125 "claim_type": "exclusive_write", 00:19:08.125 "zoned": false, 00:19:08.125 "supported_io_types": { 00:19:08.125 "read": true, 00:19:08.125 "write": true, 00:19:08.125 "unmap": true, 00:19:08.125 "flush": true, 00:19:08.125 "reset": true, 00:19:08.125 "nvme_admin": false, 00:19:08.125 "nvme_io": false, 00:19:08.125 "nvme_io_md": false, 00:19:08.125 "write_zeroes": true, 00:19:08.125 "zcopy": true, 00:19:08.125 "get_zone_info": false, 00:19:08.125 "zone_management": false, 00:19:08.125 "zone_append": false, 00:19:08.125 "compare": false, 00:19:08.125 "compare_and_write": false, 00:19:08.125 "abort": true, 00:19:08.125 "seek_hole": false, 00:19:08.125 "seek_data": false, 00:19:08.125 "copy": true, 00:19:08.125 "nvme_iov_md": false 00:19:08.125 }, 00:19:08.125 "memory_domains": [ 00:19:08.125 { 00:19:08.125 "dma_device_id": "system", 00:19:08.125 "dma_device_type": 1 00:19:08.125 }, 00:19:08.125 { 00:19:08.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:08.125 "dma_device_type": 2 00:19:08.125 } 00:19:08.125 ], 00:19:08.125 "driver_specific": {} 00:19:08.125 } 00:19:08.125 ] 00:19:08.125 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.125 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:19:08.125 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:08.125 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:08.125 10:47:29 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:08.125 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:08.125 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:08.125 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:08.125 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:08.125 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:08.125 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:08.125 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:08.125 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.125 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.125 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:08.125 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.125 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.383 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:08.383 "name": "Existed_Raid", 00:19:08.383 "uuid": "10d4b168-c0cd-40d1-9753-77b3f1dadb61", 00:19:08.383 "strip_size_kb": 0, 00:19:08.383 "state": "configuring", 00:19:08.383 "raid_level": "raid1", 
00:19:08.383 "superblock": true, 00:19:08.383 "num_base_bdevs": 2, 00:19:08.383 "num_base_bdevs_discovered": 1, 00:19:08.383 "num_base_bdevs_operational": 2, 00:19:08.383 "base_bdevs_list": [ 00:19:08.383 { 00:19:08.383 "name": "BaseBdev1", 00:19:08.383 "uuid": "452dc040-a297-49d2-8feb-0e221f9e7d87", 00:19:08.383 "is_configured": true, 00:19:08.383 "data_offset": 256, 00:19:08.384 "data_size": 7936 00:19:08.384 }, 00:19:08.384 { 00:19:08.384 "name": "BaseBdev2", 00:19:08.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.384 "is_configured": false, 00:19:08.384 "data_offset": 0, 00:19:08.384 "data_size": 0 00:19:08.384 } 00:19:08.384 ] 00:19:08.384 }' 00:19:08.384 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:08.384 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.721 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:08.721 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.721 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.721 [2024-11-15 10:47:29.749548] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:08.721 [2024-11-15 10:47:29.749628] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:08.721 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.721 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:08.721 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:19:08.721 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.721 [2024-11-15 10:47:29.757613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:08.721 [2024-11-15 10:47:29.760147] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:08.721 [2024-11-15 10:47:29.760214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:08.721 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.721 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:08.722 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:08.722 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:08.722 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:08.722 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:08.722 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:08.722 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:08.722 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:08.722 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:08.722 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:08.722 
10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:08.722 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:08.722 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:08.722 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.722 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.722 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.722 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.722 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:08.722 "name": "Existed_Raid", 00:19:08.722 "uuid": "f5e39455-35c3-4a74-9cf4-a5c210e726bf", 00:19:08.722 "strip_size_kb": 0, 00:19:08.722 "state": "configuring", 00:19:08.722 "raid_level": "raid1", 00:19:08.722 "superblock": true, 00:19:08.722 "num_base_bdevs": 2, 00:19:08.722 "num_base_bdevs_discovered": 1, 00:19:08.722 "num_base_bdevs_operational": 2, 00:19:08.722 "base_bdevs_list": [ 00:19:08.722 { 00:19:08.722 "name": "BaseBdev1", 00:19:08.722 "uuid": "452dc040-a297-49d2-8feb-0e221f9e7d87", 00:19:08.722 "is_configured": true, 00:19:08.722 "data_offset": 256, 00:19:08.722 "data_size": 7936 00:19:08.722 }, 00:19:08.722 { 00:19:08.722 "name": "BaseBdev2", 00:19:08.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.722 "is_configured": false, 00:19:08.722 "data_offset": 0, 00:19:08.722 "data_size": 0 00:19:08.722 } 00:19:08.722 ] 00:19:08.722 }' 00:19:08.722 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:19:08.722 10:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.289 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:19:09.289 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.289 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.289 [2024-11-15 10:47:30.314808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:09.289 [2024-11-15 10:47:30.315080] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:09.289 [2024-11-15 10:47:30.315102] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:09.289 [2024-11-15 10:47:30.315207] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:09.289 [2024-11-15 10:47:30.315321] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:09.289 [2024-11-15 10:47:30.315343] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:09.289 [2024-11-15 10:47:30.315426] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:09.289 BaseBdev2 00:19:09.289 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.289 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:09.289 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:09.289 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:19:09.289 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:19:09.289 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:09.289 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:09.289 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:09.289 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.289 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.289 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.289 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:09.289 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.289 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.289 [ 00:19:09.289 { 00:19:09.289 "name": "BaseBdev2", 00:19:09.289 "aliases": [ 00:19:09.289 "9304db05-0f25-4403-8a28-82064e38d43b" 00:19:09.289 ], 00:19:09.289 "product_name": "Malloc disk", 00:19:09.289 "block_size": 4128, 00:19:09.289 "num_blocks": 8192, 00:19:09.289 "uuid": "9304db05-0f25-4403-8a28-82064e38d43b", 00:19:09.289 "md_size": 32, 00:19:09.289 "md_interleave": true, 00:19:09.289 "dif_type": 0, 00:19:09.289 "assigned_rate_limits": { 00:19:09.290 "rw_ios_per_sec": 0, 00:19:09.290 "rw_mbytes_per_sec": 0, 00:19:09.290 "r_mbytes_per_sec": 0, 00:19:09.290 "w_mbytes_per_sec": 0 00:19:09.290 }, 00:19:09.290 "claimed": true, 00:19:09.290 "claim_type": "exclusive_write", 
00:19:09.290 "zoned": false, 00:19:09.290 "supported_io_types": { 00:19:09.290 "read": true, 00:19:09.290 "write": true, 00:19:09.290 "unmap": true, 00:19:09.290 "flush": true, 00:19:09.290 "reset": true, 00:19:09.290 "nvme_admin": false, 00:19:09.290 "nvme_io": false, 00:19:09.290 "nvme_io_md": false, 00:19:09.290 "write_zeroes": true, 00:19:09.290 "zcopy": true, 00:19:09.290 "get_zone_info": false, 00:19:09.290 "zone_management": false, 00:19:09.290 "zone_append": false, 00:19:09.290 "compare": false, 00:19:09.290 "compare_and_write": false, 00:19:09.290 "abort": true, 00:19:09.290 "seek_hole": false, 00:19:09.290 "seek_data": false, 00:19:09.290 "copy": true, 00:19:09.290 "nvme_iov_md": false 00:19:09.290 }, 00:19:09.290 "memory_domains": [ 00:19:09.290 { 00:19:09.290 "dma_device_id": "system", 00:19:09.290 "dma_device_type": 1 00:19:09.290 }, 00:19:09.290 { 00:19:09.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:09.290 "dma_device_type": 2 00:19:09.290 } 00:19:09.290 ], 00:19:09.290 "driver_specific": {} 00:19:09.290 } 00:19:09.290 ] 00:19:09.290 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.290 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:19:09.290 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:09.290 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:09.290 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:09.290 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:09.290 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:09.290 
10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:09.290 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:09.290 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:09.290 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:09.290 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:09.290 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:09.290 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:09.290 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.290 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:09.290 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.290 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.290 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.290 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:09.290 "name": "Existed_Raid", 00:19:09.290 "uuid": "f5e39455-35c3-4a74-9cf4-a5c210e726bf", 00:19:09.290 "strip_size_kb": 0, 00:19:09.290 "state": "online", 00:19:09.290 "raid_level": "raid1", 00:19:09.290 "superblock": true, 00:19:09.290 "num_base_bdevs": 2, 00:19:09.290 "num_base_bdevs_discovered": 2, 00:19:09.290 
"num_base_bdevs_operational": 2, 00:19:09.290 "base_bdevs_list": [ 00:19:09.290 { 00:19:09.290 "name": "BaseBdev1", 00:19:09.290 "uuid": "452dc040-a297-49d2-8feb-0e221f9e7d87", 00:19:09.290 "is_configured": true, 00:19:09.290 "data_offset": 256, 00:19:09.290 "data_size": 7936 00:19:09.290 }, 00:19:09.290 { 00:19:09.290 "name": "BaseBdev2", 00:19:09.290 "uuid": "9304db05-0f25-4403-8a28-82064e38d43b", 00:19:09.290 "is_configured": true, 00:19:09.290 "data_offset": 256, 00:19:09.290 "data_size": 7936 00:19:09.290 } 00:19:09.290 ] 00:19:09.290 }' 00:19:09.290 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:09.290 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.857 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:09.857 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:09.857 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:09.857 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:09.857 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:09.857 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:09.857 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:09.857 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.857 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.857 10:47:30 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:09.857 [2024-11-15 10:47:30.855427] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:09.857 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.857 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:09.857 "name": "Existed_Raid", 00:19:09.857 "aliases": [ 00:19:09.857 "f5e39455-35c3-4a74-9cf4-a5c210e726bf" 00:19:09.857 ], 00:19:09.857 "product_name": "Raid Volume", 00:19:09.857 "block_size": 4128, 00:19:09.857 "num_blocks": 7936, 00:19:09.857 "uuid": "f5e39455-35c3-4a74-9cf4-a5c210e726bf", 00:19:09.857 "md_size": 32, 00:19:09.857 "md_interleave": true, 00:19:09.857 "dif_type": 0, 00:19:09.857 "assigned_rate_limits": { 00:19:09.857 "rw_ios_per_sec": 0, 00:19:09.857 "rw_mbytes_per_sec": 0, 00:19:09.857 "r_mbytes_per_sec": 0, 00:19:09.857 "w_mbytes_per_sec": 0 00:19:09.857 }, 00:19:09.857 "claimed": false, 00:19:09.857 "zoned": false, 00:19:09.857 "supported_io_types": { 00:19:09.857 "read": true, 00:19:09.857 "write": true, 00:19:09.857 "unmap": false, 00:19:09.857 "flush": false, 00:19:09.857 "reset": true, 00:19:09.857 "nvme_admin": false, 00:19:09.857 "nvme_io": false, 00:19:09.857 "nvme_io_md": false, 00:19:09.857 "write_zeroes": true, 00:19:09.857 "zcopy": false, 00:19:09.857 "get_zone_info": false, 00:19:09.857 "zone_management": false, 00:19:09.857 "zone_append": false, 00:19:09.857 "compare": false, 00:19:09.857 "compare_and_write": false, 00:19:09.857 "abort": false, 00:19:09.857 "seek_hole": false, 00:19:09.857 "seek_data": false, 00:19:09.857 "copy": false, 00:19:09.857 "nvme_iov_md": false 00:19:09.857 }, 00:19:09.857 "memory_domains": [ 00:19:09.857 { 00:19:09.857 "dma_device_id": "system", 00:19:09.857 "dma_device_type": 1 00:19:09.857 }, 00:19:09.857 { 00:19:09.857 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:19:09.857 "dma_device_type": 2 00:19:09.857 }, 00:19:09.857 { 00:19:09.857 "dma_device_id": "system", 00:19:09.857 "dma_device_type": 1 00:19:09.857 }, 00:19:09.857 { 00:19:09.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:09.857 "dma_device_type": 2 00:19:09.857 } 00:19:09.857 ], 00:19:09.857 "driver_specific": { 00:19:09.857 "raid": { 00:19:09.857 "uuid": "f5e39455-35c3-4a74-9cf4-a5c210e726bf", 00:19:09.857 "strip_size_kb": 0, 00:19:09.857 "state": "online", 00:19:09.857 "raid_level": "raid1", 00:19:09.857 "superblock": true, 00:19:09.857 "num_base_bdevs": 2, 00:19:09.857 "num_base_bdevs_discovered": 2, 00:19:09.857 "num_base_bdevs_operational": 2, 00:19:09.857 "base_bdevs_list": [ 00:19:09.857 { 00:19:09.857 "name": "BaseBdev1", 00:19:09.857 "uuid": "452dc040-a297-49d2-8feb-0e221f9e7d87", 00:19:09.857 "is_configured": true, 00:19:09.857 "data_offset": 256, 00:19:09.857 "data_size": 7936 00:19:09.857 }, 00:19:09.857 { 00:19:09.857 "name": "BaseBdev2", 00:19:09.857 "uuid": "9304db05-0f25-4403-8a28-82064e38d43b", 00:19:09.857 "is_configured": true, 00:19:09.857 "data_offset": 256, 00:19:09.857 "data_size": 7936 00:19:09.857 } 00:19:09.857 ] 00:19:09.857 } 00:19:09.857 } 00:19:09.857 }' 00:19:09.857 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:09.857 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:09.857 BaseBdev2' 00:19:09.857 10:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:09.857 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:09.857 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:19:09.857 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:09.857 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.857 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:09.857 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.117 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.117 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:10.117 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:10.117 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:10.117 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:10.117 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:10.117 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.117 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.117 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.117 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:10.117 
10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:10.117 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:10.117 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.117 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.117 [2024-11-15 10:47:31.115132] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:10.117 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.117 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:10.117 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:10.117 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:10.117 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:19:10.117 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:10.117 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:10.117 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:10.117 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:10.117 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:10.117 10:47:31 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:10.117 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:10.117 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:10.117 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:10.117 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:10.117 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:10.117 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.117 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.117 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:10.117 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.117 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.117 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:10.117 "name": "Existed_Raid", 00:19:10.117 "uuid": "f5e39455-35c3-4a74-9cf4-a5c210e726bf", 00:19:10.117 "strip_size_kb": 0, 00:19:10.117 "state": "online", 00:19:10.117 "raid_level": "raid1", 00:19:10.117 "superblock": true, 00:19:10.117 "num_base_bdevs": 2, 00:19:10.117 "num_base_bdevs_discovered": 1, 00:19:10.117 "num_base_bdevs_operational": 1, 00:19:10.117 "base_bdevs_list": [ 00:19:10.117 { 00:19:10.117 "name": null, 00:19:10.117 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:10.117 "is_configured": false, 00:19:10.117 "data_offset": 0, 00:19:10.117 "data_size": 7936 00:19:10.117 }, 00:19:10.117 { 00:19:10.117 "name": "BaseBdev2", 00:19:10.117 "uuid": "9304db05-0f25-4403-8a28-82064e38d43b", 00:19:10.117 "is_configured": true, 00:19:10.117 "data_offset": 256, 00:19:10.117 "data_size": 7936 00:19:10.117 } 00:19:10.117 ] 00:19:10.117 }' 00:19:10.117 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:10.117 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.684 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:10.684 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:10.684 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.684 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.684 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.684 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:10.684 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.684 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:10.684 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:10.684 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:10.684 10:47:31 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.684 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.684 [2024-11-15 10:47:31.809457] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:10.684 [2024-11-15 10:47:31.809647] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:10.942 [2024-11-15 10:47:31.899951] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:10.942 [2024-11-15 10:47:31.900028] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:10.942 [2024-11-15 10:47:31.900048] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:10.942 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.942 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:10.942 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:10.942 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.942 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:10.942 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.942 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.942 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.942 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:10.942 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:10.942 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:10.942 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88868 00:19:10.942 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88868 ']' 00:19:10.942 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88868 00:19:10.942 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:19:10.942 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:10.942 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88868 00:19:10.942 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:10.942 killing process with pid 88868 00:19:10.942 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:10.942 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88868' 00:19:10.942 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88868 00:19:10.942 [2024-11-15 10:47:31.987876] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:10.942 10:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88868 00:19:10.942 [2024-11-15 10:47:32.004010] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:12.318 
10:47:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:19:12.318 00:19:12.318 real 0m5.511s 00:19:12.318 user 0m8.295s 00:19:12.318 sys 0m0.810s 00:19:12.318 10:47:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:12.318 ************************************ 00:19:12.318 END TEST raid_state_function_test_sb_md_interleaved 00:19:12.318 ************************************ 00:19:12.318 10:47:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.318 10:47:33 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:19:12.318 10:47:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:12.318 10:47:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:12.318 10:47:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:12.318 ************************************ 00:19:12.318 START TEST raid_superblock_test_md_interleaved 00:19:12.318 ************************************ 00:19:12.318 10:47:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:19:12.318 10:47:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:12.318 10:47:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:12.318 10:47:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:12.318 10:47:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:12.318 10:47:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:12.318 10:47:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:19:12.318 10:47:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:12.318 10:47:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:12.318 10:47:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:12.318 10:47:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:12.318 10:47:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:12.318 10:47:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:12.318 10:47:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:12.318 10:47:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:12.318 10:47:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:12.318 10:47:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89126 00:19:12.318 10:47:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89126 00:19:12.318 10:47:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89126 ']' 00:19:12.318 10:47:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:12.318 10:47:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:12.318 10:47:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:12.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:12.318 10:47:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:12.318 10:47:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:12.318 10:47:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.318 [2024-11-15 10:47:33.230025] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:19:12.318 [2024-11-15 10:47:33.230205] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89126 ] 00:19:12.318 [2024-11-15 10:47:33.404905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.576 [2024-11-15 10:47:33.529025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.576 [2024-11-15 10:47:33.722577] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:12.576 [2024-11-15 10:47:33.722656] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:13.144 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:13.144 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:19:13.144 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:13.144 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:13.144 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:13.144 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:19:13.144 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:13.144 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:13.144 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:13.144 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:13.144 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:19:13.144 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.144 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.144 malloc1 00:19:13.144 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.144 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:13.144 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.144 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.144 [2024-11-15 10:47:34.232916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:13.144 [2024-11-15 10:47:34.233041] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:13.144 [2024-11-15 10:47:34.233089] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:13.144 [2024-11-15 10:47:34.233105] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:13.144 
[2024-11-15 10:47:34.235770] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:13.144 [2024-11-15 10:47:34.235840] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:13.144 pt1 00:19:13.144 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.144 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:13.144 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:13.144 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:13.144 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:13.144 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:13.144 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:13.144 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:13.144 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:13.144 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:19:13.144 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.144 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.144 malloc2 00:19:13.144 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.144 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:13.144 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.144 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.144 [2024-11-15 10:47:34.286389] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:13.144 [2024-11-15 10:47:34.286469] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:13.144 [2024-11-15 10:47:34.286515] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:13.144 [2024-11-15 10:47:34.286543] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:13.144 [2024-11-15 10:47:34.289212] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:13.144 [2024-11-15 10:47:34.289269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:13.144 pt2 00:19:13.144 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.144 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:13.144 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:13.144 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:13.145 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.145 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.145 [2024-11-15 10:47:34.298451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:13.145 [2024-11-15 10:47:34.301292] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:13.145 [2024-11-15 10:47:34.301589] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:13.145 [2024-11-15 10:47:34.301609] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:13.145 [2024-11-15 10:47:34.301702] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:13.145 [2024-11-15 10:47:34.301801] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:13.145 [2024-11-15 10:47:34.301820] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:13.145 [2024-11-15 10:47:34.301919] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:13.403 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.403 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:13.403 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:13.403 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:13.403 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:13.403 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:13.403 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:13.403 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:13.403 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:13.403 
10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:13.403 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:13.403 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.403 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.403 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.403 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.403 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.403 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:13.403 "name": "raid_bdev1", 00:19:13.403 "uuid": "69f122f3-00bf-4b1f-9c24-d1d6a078ce5c", 00:19:13.403 "strip_size_kb": 0, 00:19:13.403 "state": "online", 00:19:13.403 "raid_level": "raid1", 00:19:13.403 "superblock": true, 00:19:13.403 "num_base_bdevs": 2, 00:19:13.403 "num_base_bdevs_discovered": 2, 00:19:13.403 "num_base_bdevs_operational": 2, 00:19:13.403 "base_bdevs_list": [ 00:19:13.403 { 00:19:13.403 "name": "pt1", 00:19:13.403 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:13.403 "is_configured": true, 00:19:13.403 "data_offset": 256, 00:19:13.403 "data_size": 7936 00:19:13.403 }, 00:19:13.403 { 00:19:13.403 "name": "pt2", 00:19:13.403 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:13.403 "is_configured": true, 00:19:13.403 "data_offset": 256, 00:19:13.403 "data_size": 7936 00:19:13.403 } 00:19:13.403 ] 00:19:13.403 }' 00:19:13.403 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:13.403 10:47:34 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.662 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:13.662 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:13.662 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:13.662 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:13.662 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:13.662 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:13.662 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:13.662 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:13.662 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.662 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.662 [2024-11-15 10:47:34.798954] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:13.662 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.921 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:13.921 "name": "raid_bdev1", 00:19:13.921 "aliases": [ 00:19:13.921 "69f122f3-00bf-4b1f-9c24-d1d6a078ce5c" 00:19:13.921 ], 00:19:13.921 "product_name": "Raid Volume", 00:19:13.921 "block_size": 4128, 00:19:13.921 "num_blocks": 7936, 00:19:13.921 "uuid": "69f122f3-00bf-4b1f-9c24-d1d6a078ce5c", 00:19:13.921 "md_size": 32, 
00:19:13.921 "md_interleave": true, 00:19:13.921 "dif_type": 0, 00:19:13.921 "assigned_rate_limits": { 00:19:13.921 "rw_ios_per_sec": 0, 00:19:13.921 "rw_mbytes_per_sec": 0, 00:19:13.921 "r_mbytes_per_sec": 0, 00:19:13.921 "w_mbytes_per_sec": 0 00:19:13.921 }, 00:19:13.921 "claimed": false, 00:19:13.921 "zoned": false, 00:19:13.921 "supported_io_types": { 00:19:13.921 "read": true, 00:19:13.921 "write": true, 00:19:13.921 "unmap": false, 00:19:13.921 "flush": false, 00:19:13.921 "reset": true, 00:19:13.921 "nvme_admin": false, 00:19:13.921 "nvme_io": false, 00:19:13.921 "nvme_io_md": false, 00:19:13.921 "write_zeroes": true, 00:19:13.921 "zcopy": false, 00:19:13.921 "get_zone_info": false, 00:19:13.921 "zone_management": false, 00:19:13.921 "zone_append": false, 00:19:13.921 "compare": false, 00:19:13.921 "compare_and_write": false, 00:19:13.921 "abort": false, 00:19:13.921 "seek_hole": false, 00:19:13.921 "seek_data": false, 00:19:13.921 "copy": false, 00:19:13.921 "nvme_iov_md": false 00:19:13.921 }, 00:19:13.921 "memory_domains": [ 00:19:13.921 { 00:19:13.921 "dma_device_id": "system", 00:19:13.921 "dma_device_type": 1 00:19:13.921 }, 00:19:13.921 { 00:19:13.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.921 "dma_device_type": 2 00:19:13.921 }, 00:19:13.921 { 00:19:13.921 "dma_device_id": "system", 00:19:13.921 "dma_device_type": 1 00:19:13.921 }, 00:19:13.921 { 00:19:13.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.921 "dma_device_type": 2 00:19:13.921 } 00:19:13.921 ], 00:19:13.921 "driver_specific": { 00:19:13.921 "raid": { 00:19:13.921 "uuid": "69f122f3-00bf-4b1f-9c24-d1d6a078ce5c", 00:19:13.921 "strip_size_kb": 0, 00:19:13.921 "state": "online", 00:19:13.921 "raid_level": "raid1", 00:19:13.921 "superblock": true, 00:19:13.921 "num_base_bdevs": 2, 00:19:13.921 "num_base_bdevs_discovered": 2, 00:19:13.921 "num_base_bdevs_operational": 2, 00:19:13.921 "base_bdevs_list": [ 00:19:13.921 { 00:19:13.921 "name": "pt1", 00:19:13.921 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:19:13.921 "is_configured": true, 00:19:13.921 "data_offset": 256, 00:19:13.921 "data_size": 7936 00:19:13.921 }, 00:19:13.921 { 00:19:13.921 "name": "pt2", 00:19:13.922 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:13.922 "is_configured": true, 00:19:13.922 "data_offset": 256, 00:19:13.922 "data_size": 7936 00:19:13.922 } 00:19:13.922 ] 00:19:13.922 } 00:19:13.922 } 00:19:13.922 }' 00:19:13.922 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:13.922 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:13.922 pt2' 00:19:13.922 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:13.922 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:13.922 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:13.922 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:13.922 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:13.922 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.922 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.922 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.922 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:13.922 10:47:34 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:13.922 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:13.922 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:13.922 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.922 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:13.922 10:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.922 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.922 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:13.922 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:13.922 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:13.922 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:13.922 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.922 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.922 [2024-11-15 10:47:35.046983] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:13.922 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.182 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=69f122f3-00bf-4b1f-9c24-d1d6a078ce5c 00:19:14.182 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 69f122f3-00bf-4b1f-9c24-d1d6a078ce5c ']' 00:19:14.182 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:14.182 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.182 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.182 [2024-11-15 10:47:35.098638] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:14.182 [2024-11-15 10:47:35.098669] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:14.182 [2024-11-15 10:47:35.098770] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:14.182 [2024-11-15 10:47:35.098846] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:14.182 [2024-11-15 10:47:35.098872] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:14.182 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.182 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:14.182 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.182 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.182 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.182 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.182 10:47:35 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:14.182 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:14.182 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:14.182 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:14.182 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.182 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.182 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.182 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:14.182 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:14.182 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.183 10:47:35 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.183 [2024-11-15 10:47:35.234697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:14.183 [2024-11-15 10:47:35.237327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:14.183 [2024-11-15 10:47:35.237449] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:19:14.183 [2024-11-15 10:47:35.237571] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:14.183 [2024-11-15 10:47:35.237600] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:14.183 [2024-11-15 10:47:35.237616] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:14.183 request: 00:19:14.183 { 00:19:14.183 "name": "raid_bdev1", 00:19:14.183 "raid_level": "raid1", 00:19:14.183 "base_bdevs": [ 00:19:14.183 "malloc1", 00:19:14.183 "malloc2" 00:19:14.183 ], 00:19:14.183 "superblock": false, 00:19:14.183 "method": "bdev_raid_create", 00:19:14.183 "req_id": 1 00:19:14.183 } 00:19:14.183 Got JSON-RPC error response 00:19:14.183 response: 00:19:14.183 { 00:19:14.183 "code": -17, 00:19:14.183 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:14.183 } 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:14.183 10:47:35 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.183 [2024-11-15 10:47:35.294683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:14.183 [2024-11-15 10:47:35.294748] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:14.183 [2024-11-15 10:47:35.294772] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:14.183 [2024-11-15 10:47:35.294789] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:14.183 [2024-11-15 10:47:35.297428] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:14.183 [2024-11-15 10:47:35.297485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:14.183 [2024-11-15 10:47:35.297566] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:14.183 [2024-11-15 10:47:35.297643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:14.183 pt1 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.183 10:47:35 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.183 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.441 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:14.441 
"name": "raid_bdev1", 00:19:14.441 "uuid": "69f122f3-00bf-4b1f-9c24-d1d6a078ce5c", 00:19:14.441 "strip_size_kb": 0, 00:19:14.441 "state": "configuring", 00:19:14.441 "raid_level": "raid1", 00:19:14.441 "superblock": true, 00:19:14.441 "num_base_bdevs": 2, 00:19:14.441 "num_base_bdevs_discovered": 1, 00:19:14.441 "num_base_bdevs_operational": 2, 00:19:14.441 "base_bdevs_list": [ 00:19:14.441 { 00:19:14.441 "name": "pt1", 00:19:14.441 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:14.441 "is_configured": true, 00:19:14.441 "data_offset": 256, 00:19:14.441 "data_size": 7936 00:19:14.441 }, 00:19:14.441 { 00:19:14.441 "name": null, 00:19:14.441 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:14.441 "is_configured": false, 00:19:14.441 "data_offset": 256, 00:19:14.441 "data_size": 7936 00:19:14.441 } 00:19:14.441 ] 00:19:14.441 }' 00:19:14.441 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:14.441 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.700 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:14.700 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:14.700 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:14.700 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:14.700 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.700 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.700 [2024-11-15 10:47:35.826918] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:14.700 [2024-11-15 10:47:35.827073] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:14.700 [2024-11-15 10:47:35.827104] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:14.700 [2024-11-15 10:47:35.827120] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:14.700 [2024-11-15 10:47:35.827369] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:14.700 [2024-11-15 10:47:35.827396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:14.700 [2024-11-15 10:47:35.827458] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:14.700 [2024-11-15 10:47:35.827493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:14.700 [2024-11-15 10:47:35.827659] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:14.700 [2024-11-15 10:47:35.827681] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:14.700 [2024-11-15 10:47:35.827769] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:14.700 [2024-11-15 10:47:35.827867] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:14.700 [2024-11-15 10:47:35.827883] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:14.700 [2024-11-15 10:47:35.828029] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:14.700 pt2 00:19:14.700 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.700 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:14.700 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:14.700 10:47:35 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:14.700 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:14.700 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:14.700 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:14.700 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:14.700 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:14.700 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:14.700 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:14.700 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:14.700 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:14.700 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.700 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.700 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.700 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.700 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.958 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:14.958 "name": 
"raid_bdev1", 00:19:14.958 "uuid": "69f122f3-00bf-4b1f-9c24-d1d6a078ce5c", 00:19:14.958 "strip_size_kb": 0, 00:19:14.958 "state": "online", 00:19:14.958 "raid_level": "raid1", 00:19:14.958 "superblock": true, 00:19:14.958 "num_base_bdevs": 2, 00:19:14.958 "num_base_bdevs_discovered": 2, 00:19:14.958 "num_base_bdevs_operational": 2, 00:19:14.958 "base_bdevs_list": [ 00:19:14.958 { 00:19:14.958 "name": "pt1", 00:19:14.958 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:14.958 "is_configured": true, 00:19:14.958 "data_offset": 256, 00:19:14.958 "data_size": 7936 00:19:14.958 }, 00:19:14.958 { 00:19:14.958 "name": "pt2", 00:19:14.958 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:14.958 "is_configured": true, 00:19:14.958 "data_offset": 256, 00:19:14.958 "data_size": 7936 00:19:14.958 } 00:19:14.958 ] 00:19:14.958 }' 00:19:14.958 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:14.958 10:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.217 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:15.217 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:15.217 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:15.217 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:15.217 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:15.217 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:15.217 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:15.217 10:47:36 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.217 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:15.217 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.217 [2024-11-15 10:47:36.363380] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:15.475 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.475 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:15.475 "name": "raid_bdev1", 00:19:15.475 "aliases": [ 00:19:15.475 "69f122f3-00bf-4b1f-9c24-d1d6a078ce5c" 00:19:15.475 ], 00:19:15.475 "product_name": "Raid Volume", 00:19:15.475 "block_size": 4128, 00:19:15.475 "num_blocks": 7936, 00:19:15.475 "uuid": "69f122f3-00bf-4b1f-9c24-d1d6a078ce5c", 00:19:15.475 "md_size": 32, 00:19:15.475 "md_interleave": true, 00:19:15.475 "dif_type": 0, 00:19:15.476 "assigned_rate_limits": { 00:19:15.476 "rw_ios_per_sec": 0, 00:19:15.476 "rw_mbytes_per_sec": 0, 00:19:15.476 "r_mbytes_per_sec": 0, 00:19:15.476 "w_mbytes_per_sec": 0 00:19:15.476 }, 00:19:15.476 "claimed": false, 00:19:15.476 "zoned": false, 00:19:15.476 "supported_io_types": { 00:19:15.476 "read": true, 00:19:15.476 "write": true, 00:19:15.476 "unmap": false, 00:19:15.476 "flush": false, 00:19:15.476 "reset": true, 00:19:15.476 "nvme_admin": false, 00:19:15.476 "nvme_io": false, 00:19:15.476 "nvme_io_md": false, 00:19:15.476 "write_zeroes": true, 00:19:15.476 "zcopy": false, 00:19:15.476 "get_zone_info": false, 00:19:15.476 "zone_management": false, 00:19:15.476 "zone_append": false, 00:19:15.476 "compare": false, 00:19:15.476 "compare_and_write": false, 00:19:15.476 "abort": false, 00:19:15.476 "seek_hole": false, 00:19:15.476 "seek_data": false, 00:19:15.476 "copy": false, 00:19:15.476 "nvme_iov_md": 
false 00:19:15.476 }, 00:19:15.476 "memory_domains": [ 00:19:15.476 { 00:19:15.476 "dma_device_id": "system", 00:19:15.476 "dma_device_type": 1 00:19:15.476 }, 00:19:15.476 { 00:19:15.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:15.476 "dma_device_type": 2 00:19:15.476 }, 00:19:15.476 { 00:19:15.476 "dma_device_id": "system", 00:19:15.476 "dma_device_type": 1 00:19:15.476 }, 00:19:15.476 { 00:19:15.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:15.476 "dma_device_type": 2 00:19:15.476 } 00:19:15.476 ], 00:19:15.476 "driver_specific": { 00:19:15.476 "raid": { 00:19:15.476 "uuid": "69f122f3-00bf-4b1f-9c24-d1d6a078ce5c", 00:19:15.476 "strip_size_kb": 0, 00:19:15.476 "state": "online", 00:19:15.476 "raid_level": "raid1", 00:19:15.476 "superblock": true, 00:19:15.476 "num_base_bdevs": 2, 00:19:15.476 "num_base_bdevs_discovered": 2, 00:19:15.476 "num_base_bdevs_operational": 2, 00:19:15.476 "base_bdevs_list": [ 00:19:15.476 { 00:19:15.476 "name": "pt1", 00:19:15.476 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:15.476 "is_configured": true, 00:19:15.476 "data_offset": 256, 00:19:15.476 "data_size": 7936 00:19:15.476 }, 00:19:15.476 { 00:19:15.476 "name": "pt2", 00:19:15.476 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:15.476 "is_configured": true, 00:19:15.476 "data_offset": 256, 00:19:15.476 "data_size": 7936 00:19:15.476 } 00:19:15.476 ] 00:19:15.476 } 00:19:15.476 } 00:19:15.476 }' 00:19:15.476 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:15.476 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:15.476 pt2' 00:19:15.476 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:15.476 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:15.476 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:15.476 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:15.476 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:15.476 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.476 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.476 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.476 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:15.476 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:15.476 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:15.476 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:15.476 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.476 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.476 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:15.476 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.476 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:19:15.476 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:15.476 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:15.476 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:15.476 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.476 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.735 [2024-11-15 10:47:36.635445] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:15.735 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.735 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 69f122f3-00bf-4b1f-9c24-d1d6a078ce5c '!=' 69f122f3-00bf-4b1f-9c24-d1d6a078ce5c ']' 00:19:15.735 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:15.735 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:15.735 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:19:15.735 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:15.735 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.735 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.735 [2024-11-15 10:47:36.683137] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:15.735 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:19:15.735 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:15.735 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:15.735 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:15.735 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:15.735 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:15.735 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:15.735 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:15.735 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:15.735 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:15.735 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:15.735 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.735 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.735 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.735 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.735 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.735 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:19:15.735 "name": "raid_bdev1", 00:19:15.735 "uuid": "69f122f3-00bf-4b1f-9c24-d1d6a078ce5c", 00:19:15.735 "strip_size_kb": 0, 00:19:15.735 "state": "online", 00:19:15.735 "raid_level": "raid1", 00:19:15.735 "superblock": true, 00:19:15.735 "num_base_bdevs": 2, 00:19:15.735 "num_base_bdevs_discovered": 1, 00:19:15.735 "num_base_bdevs_operational": 1, 00:19:15.735 "base_bdevs_list": [ 00:19:15.735 { 00:19:15.735 "name": null, 00:19:15.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.735 "is_configured": false, 00:19:15.735 "data_offset": 0, 00:19:15.735 "data_size": 7936 00:19:15.735 }, 00:19:15.735 { 00:19:15.735 "name": "pt2", 00:19:15.735 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:15.735 "is_configured": true, 00:19:15.735 "data_offset": 256, 00:19:15.735 "data_size": 7936 00:19:15.735 } 00:19:15.735 ] 00:19:15.735 }' 00:19:15.735 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:15.735 10:47:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.303 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:16.303 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.303 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.303 [2024-11-15 10:47:37.231304] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:16.303 [2024-11-15 10:47:37.231341] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:16.303 [2024-11-15 10:47:37.231461] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:16.303 [2024-11-15 10:47:37.231573] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:19:16.303 [2024-11-15 10:47:37.231595] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:16.303 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.303 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.303 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.303 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.303 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:16.303 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.303 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:16.303 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:16.303 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:16.303 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:16.303 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:16.303 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.303 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.303 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.303 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:16.303 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:16.303 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:16.303 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:16.303 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:19:16.303 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:16.303 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.303 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.303 [2024-11-15 10:47:37.311313] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:16.303 [2024-11-15 10:47:37.311407] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:16.303 [2024-11-15 10:47:37.311431] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:16.303 [2024-11-15 10:47:37.311447] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:16.303 [2024-11-15 10:47:37.314081] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:16.303 [2024-11-15 10:47:37.314143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:16.303 [2024-11-15 10:47:37.314225] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:16.303 [2024-11-15 10:47:37.314286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:16.303 [2024-11-15 10:47:37.314405] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:16.303 [2024-11-15 10:47:37.314427] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:19:16.303 [2024-11-15 10:47:37.314555] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:16.303 [2024-11-15 10:47:37.314646] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:16.303 [2024-11-15 10:47:37.314661] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:16.303 [2024-11-15 10:47:37.314751] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:16.303 pt2 00:19:16.303 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.303 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:16.303 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:16.303 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:16.303 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:16.303 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:16.303 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:16.303 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:16.303 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:16.304 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:16.304 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:16.304 10:47:37 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.304 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.304 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.304 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.304 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.304 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:16.304 "name": "raid_bdev1", 00:19:16.304 "uuid": "69f122f3-00bf-4b1f-9c24-d1d6a078ce5c", 00:19:16.304 "strip_size_kb": 0, 00:19:16.304 "state": "online", 00:19:16.304 "raid_level": "raid1", 00:19:16.304 "superblock": true, 00:19:16.304 "num_base_bdevs": 2, 00:19:16.304 "num_base_bdevs_discovered": 1, 00:19:16.304 "num_base_bdevs_operational": 1, 00:19:16.304 "base_bdevs_list": [ 00:19:16.304 { 00:19:16.304 "name": null, 00:19:16.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.304 "is_configured": false, 00:19:16.304 "data_offset": 256, 00:19:16.304 "data_size": 7936 00:19:16.304 }, 00:19:16.304 { 00:19:16.304 "name": "pt2", 00:19:16.304 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:16.304 "is_configured": true, 00:19:16.304 "data_offset": 256, 00:19:16.304 "data_size": 7936 00:19:16.304 } 00:19:16.304 ] 00:19:16.304 }' 00:19:16.304 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:16.304 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.870 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:16.870 10:47:37 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.870 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.870 [2024-11-15 10:47:37.895441] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:16.870 [2024-11-15 10:47:37.895495] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:16.870 [2024-11-15 10:47:37.895631] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:16.870 [2024-11-15 10:47:37.895709] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:16.870 [2024-11-15 10:47:37.895725] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:16.870 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.870 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.870 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.870 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.870 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:16.870 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.870 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:16.870 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:16.870 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:16.870 10:47:37 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:16.870 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.870 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.870 [2024-11-15 10:47:37.959477] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:16.870 [2024-11-15 10:47:37.959595] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:16.870 [2024-11-15 10:47:37.959630] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:16.870 [2024-11-15 10:47:37.959645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:16.870 [2024-11-15 10:47:37.962258] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:16.870 [2024-11-15 10:47:37.962314] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:16.870 [2024-11-15 10:47:37.962397] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:16.870 [2024-11-15 10:47:37.962451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:16.870 [2024-11-15 10:47:37.962638] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:16.870 [2024-11-15 10:47:37.962657] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:16.870 [2024-11-15 10:47:37.962680] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:16.870 [2024-11-15 10:47:37.962748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:16.870 [2024-11-15 10:47:37.962845] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:19:16.870 [2024-11-15 10:47:37.962861] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:16.870 [2024-11-15 10:47:37.962985] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:16.870 [2024-11-15 10:47:37.963096] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:16.870 [2024-11-15 10:47:37.963114] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:16.870 [2024-11-15 10:47:37.963203] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:16.870 pt1 00:19:16.870 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.870 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:19:16.870 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:16.870 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:16.870 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:16.870 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:16.870 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:16.870 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:16.870 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:16.870 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:16.870 10:47:37 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:16.870 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:16.870 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.870 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.870 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.870 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.870 10:47:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.870 10:47:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:16.870 "name": "raid_bdev1", 00:19:16.870 "uuid": "69f122f3-00bf-4b1f-9c24-d1d6a078ce5c", 00:19:16.870 "strip_size_kb": 0, 00:19:16.870 "state": "online", 00:19:16.870 "raid_level": "raid1", 00:19:16.870 "superblock": true, 00:19:16.870 "num_base_bdevs": 2, 00:19:16.870 "num_base_bdevs_discovered": 1, 00:19:16.870 "num_base_bdevs_operational": 1, 00:19:16.870 "base_bdevs_list": [ 00:19:16.870 { 00:19:16.870 "name": null, 00:19:16.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.870 "is_configured": false, 00:19:16.870 "data_offset": 256, 00:19:16.870 "data_size": 7936 00:19:16.870 }, 00:19:16.870 { 00:19:16.870 "name": "pt2", 00:19:16.870 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:16.870 "is_configured": true, 00:19:16.870 "data_offset": 256, 00:19:16.870 "data_size": 7936 00:19:16.870 } 00:19:16.870 ] 00:19:16.870 }' 00:19:16.871 10:47:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:16.871 10:47:38 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:19:17.438 10:47:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:17.438 10:47:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.438 10:47:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.438 10:47:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:17.438 10:47:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.438 10:47:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:17.438 10:47:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:17.438 10:47:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:17.438 10:47:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.438 10:47:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.438 [2024-11-15 10:47:38.552011] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:17.438 10:47:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.438 10:47:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 69f122f3-00bf-4b1f-9c24-d1d6a078ce5c '!=' 69f122f3-00bf-4b1f-9c24-d1d6a078ce5c ']' 00:19:17.438 10:47:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89126 00:19:17.438 10:47:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89126 ']' 00:19:17.438 10:47:38 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89126 00:19:17.438 10:47:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:19:17.696 10:47:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:17.696 10:47:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89126 00:19:17.696 10:47:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:17.696 10:47:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:17.696 killing process with pid 89126 00:19:17.696 10:47:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89126' 00:19:17.696 10:47:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 89126 00:19:17.696 [2024-11-15 10:47:38.628583] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:17.696 [2024-11-15 10:47:38.628724] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:17.696 10:47:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 89126 00:19:17.696 [2024-11-15 10:47:38.628811] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:17.696 [2024-11-15 10:47:38.628836] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:17.696 [2024-11-15 10:47:38.821380] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:19.141 10:47:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:19:19.141 00:19:19.141 real 0m6.745s 00:19:19.141 user 0m10.717s 00:19:19.141 sys 0m0.946s 
00:19:19.141 10:47:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:19.141 10:47:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.141 ************************************ 00:19:19.141 END TEST raid_superblock_test_md_interleaved 00:19:19.141 ************************************ 00:19:19.141 10:47:39 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:19:19.141 10:47:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:19.141 10:47:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:19.141 10:47:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:19.141 ************************************ 00:19:19.141 START TEST raid_rebuild_test_sb_md_interleaved 00:19:19.141 ************************************ 00:19:19.141 10:47:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:19:19.141 10:47:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:19.141 10:47:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:19.141 10:47:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:19.141 10:47:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:19.141 10:47:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:19:19.141 10:47:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:19.141 10:47:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:19.141 10:47:39 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:19.141 10:47:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:19.141 10:47:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:19.141 10:47:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:19.141 10:47:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:19.141 10:47:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:19.141 10:47:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:19.141 10:47:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:19.141 10:47:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:19.141 10:47:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:19.141 10:47:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:19.141 10:47:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:19.141 10:47:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:19.141 10:47:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:19.141 10:47:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:19.141 10:47:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:19.141 10:47:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:19.141 
10:47:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89449 00:19:19.141 10:47:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89449 00:19:19.141 10:47:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:19.141 10:47:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89449 ']' 00:19:19.141 10:47:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.141 10:47:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:19.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:19.141 10:47:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.141 10:47:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:19.141 10:47:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.141 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:19.141 Zero copy mechanism will not be used. 00:19:19.141 [2024-11-15 10:47:40.063271] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:19:19.141 [2024-11-15 10:47:40.063448] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89449 ] 00:19:19.141 [2024-11-15 10:47:40.240641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.399 [2024-11-15 10:47:40.374825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.658 [2024-11-15 10:47:40.589697] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:19.658 [2024-11-15 10:47:40.589773] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:19.917 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:19.917 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:19:19.917 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:19.917 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:19:19.918 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.918 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.918 BaseBdev1_malloc 00:19:19.918 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.918 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:19.918 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.918 10:47:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.918 [2024-11-15 10:47:41.075616] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:19.918 [2024-11-15 10:47:41.075690] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:19.918 [2024-11-15 10:47:41.075725] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:19.918 [2024-11-15 10:47:41.075746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.177 [2024-11-15 10:47:41.078377] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.177 [2024-11-15 10:47:41.078462] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:20.177 BaseBdev1 00:19:20.177 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.177 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:20.177 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:19:20.177 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.177 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.177 BaseBdev2_malloc 00:19:20.177 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.177 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:20.177 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.177 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:19:20.177 [2024-11-15 10:47:41.133017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:20.177 [2024-11-15 10:47:41.133290] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.177 [2024-11-15 10:47:41.133333] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:20.177 [2024-11-15 10:47:41.133356] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.177 [2024-11-15 10:47:41.135956] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.177 [2024-11-15 10:47:41.136020] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:20.177 BaseBdev2 00:19:20.177 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.177 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:19:20.177 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.177 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.177 spare_malloc 00:19:20.177 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.177 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:20.177 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.177 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.177 spare_delay 00:19:20.177 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.177 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:20.177 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.177 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.177 [2024-11-15 10:47:41.205689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:20.177 [2024-11-15 10:47:41.205765] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.177 [2024-11-15 10:47:41.205798] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:20.177 [2024-11-15 10:47:41.205817] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.177 [2024-11-15 10:47:41.208378] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.177 [2024-11-15 10:47:41.208442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:20.177 spare 00:19:20.177 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.177 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:20.177 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.177 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.177 [2024-11-15 10:47:41.213765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:20.177 [2024-11-15 10:47:41.216397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:20.177 [2024-11-15 
10:47:41.216868] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:20.177 [2024-11-15 10:47:41.217017] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:20.177 [2024-11-15 10:47:41.217170] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:20.177 [2024-11-15 10:47:41.217385] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:20.177 [2024-11-15 10:47:41.217522] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:20.177 [2024-11-15 10:47:41.217808] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:20.177 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.177 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:20.177 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:20.177 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:20.177 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:20.177 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:20.177 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:20.177 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.177 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.177 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:19:20.177 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.177 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.177 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.177 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.177 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.177 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.177 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.177 "name": "raid_bdev1", 00:19:20.177 "uuid": "72bb65a6-9c09-4721-9c81-9d4afae2ca71", 00:19:20.177 "strip_size_kb": 0, 00:19:20.177 "state": "online", 00:19:20.177 "raid_level": "raid1", 00:19:20.177 "superblock": true, 00:19:20.177 "num_base_bdevs": 2, 00:19:20.177 "num_base_bdevs_discovered": 2, 00:19:20.177 "num_base_bdevs_operational": 2, 00:19:20.177 "base_bdevs_list": [ 00:19:20.177 { 00:19:20.177 "name": "BaseBdev1", 00:19:20.177 "uuid": "d66d670f-bc69-514b-8b42-bd3ebe10394d", 00:19:20.177 "is_configured": true, 00:19:20.178 "data_offset": 256, 00:19:20.178 "data_size": 7936 00:19:20.178 }, 00:19:20.178 { 00:19:20.178 "name": "BaseBdev2", 00:19:20.178 "uuid": "609c54be-c25a-5a0a-8450-62503312ac38", 00:19:20.178 "is_configured": true, 00:19:20.178 "data_offset": 256, 00:19:20.178 "data_size": 7936 00:19:20.178 } 00:19:20.178 ] 00:19:20.178 }' 00:19:20.178 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.178 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.745 10:47:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:20.745 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.745 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.745 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:20.745 [2024-11-15 10:47:41.742420] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:20.745 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.745 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:19:20.745 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.745 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.745 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.745 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:20.745 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.745 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:20.745 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:20.745 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:19:20.745 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:20.745 10:47:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.745 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.745 [2024-11-15 10:47:41.830054] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:20.745 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.745 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:20.745 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:20.745 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:20.745 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:20.745 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:20.745 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:20.746 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.746 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.746 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.746 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.746 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.746 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.746 10:47:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.746 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.746 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.746 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.746 "name": "raid_bdev1", 00:19:20.746 "uuid": "72bb65a6-9c09-4721-9c81-9d4afae2ca71", 00:19:20.746 "strip_size_kb": 0, 00:19:20.746 "state": "online", 00:19:20.746 "raid_level": "raid1", 00:19:20.746 "superblock": true, 00:19:20.746 "num_base_bdevs": 2, 00:19:20.746 "num_base_bdevs_discovered": 1, 00:19:20.746 "num_base_bdevs_operational": 1, 00:19:20.746 "base_bdevs_list": [ 00:19:20.746 { 00:19:20.746 "name": null, 00:19:20.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.746 "is_configured": false, 00:19:20.746 "data_offset": 0, 00:19:20.746 "data_size": 7936 00:19:20.746 }, 00:19:20.746 { 00:19:20.746 "name": "BaseBdev2", 00:19:20.746 "uuid": "609c54be-c25a-5a0a-8450-62503312ac38", 00:19:20.746 "is_configured": true, 00:19:20.746 "data_offset": 256, 00:19:20.746 "data_size": 7936 00:19:20.746 } 00:19:20.746 ] 00:19:20.746 }' 00:19:20.746 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.746 10:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:21.313 10:47:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:21.313 10:47:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.313 10:47:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:21.313 [2024-11-15 10:47:42.302278] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:21.313 [2024-11-15 10:47:42.319000] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:21.313 10:47:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.313 10:47:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:21.313 [2024-11-15 10:47:42.321716] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:22.249 10:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:22.249 10:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:22.249 10:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:22.249 10:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:22.249 10:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:22.249 10:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.249 10:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.249 10:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.249 10:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.250 10:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.250 10:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:22.250 "name": "raid_bdev1", 00:19:22.250 
"uuid": "72bb65a6-9c09-4721-9c81-9d4afae2ca71", 00:19:22.250 "strip_size_kb": 0, 00:19:22.250 "state": "online", 00:19:22.250 "raid_level": "raid1", 00:19:22.250 "superblock": true, 00:19:22.250 "num_base_bdevs": 2, 00:19:22.250 "num_base_bdevs_discovered": 2, 00:19:22.250 "num_base_bdevs_operational": 2, 00:19:22.250 "process": { 00:19:22.250 "type": "rebuild", 00:19:22.250 "target": "spare", 00:19:22.250 "progress": { 00:19:22.250 "blocks": 2560, 00:19:22.250 "percent": 32 00:19:22.250 } 00:19:22.250 }, 00:19:22.250 "base_bdevs_list": [ 00:19:22.250 { 00:19:22.250 "name": "spare", 00:19:22.250 "uuid": "24dfa7b9-9304-5317-9f74-fc2196d7a88b", 00:19:22.250 "is_configured": true, 00:19:22.250 "data_offset": 256, 00:19:22.250 "data_size": 7936 00:19:22.250 }, 00:19:22.250 { 00:19:22.250 "name": "BaseBdev2", 00:19:22.250 "uuid": "609c54be-c25a-5a0a-8450-62503312ac38", 00:19:22.250 "is_configured": true, 00:19:22.250 "data_offset": 256, 00:19:22.250 "data_size": 7936 00:19:22.250 } 00:19:22.250 ] 00:19:22.250 }' 00:19:22.250 10:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:22.508 10:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:22.508 10:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:22.508 10:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:22.508 10:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:22.508 10:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.508 10:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.508 [2024-11-15 10:47:43.490995] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:19:22.508 [2024-11-15 10:47:43.530921] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:22.508 [2024-11-15 10:47:43.531187] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:22.508 [2024-11-15 10:47:43.531446] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:22.508 [2024-11-15 10:47:43.531557] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:22.508 10:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.508 10:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:22.508 10:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:22.508 10:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:22.508 10:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:22.508 10:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:22.508 10:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:22.509 10:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:22.509 10:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:22.509 10:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:22.509 10:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:22.509 10:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.509 10:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.509 10:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.509 10:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.509 10:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.509 10:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:22.509 "name": "raid_bdev1", 00:19:22.509 "uuid": "72bb65a6-9c09-4721-9c81-9d4afae2ca71", 00:19:22.509 "strip_size_kb": 0, 00:19:22.509 "state": "online", 00:19:22.509 "raid_level": "raid1", 00:19:22.509 "superblock": true, 00:19:22.509 "num_base_bdevs": 2, 00:19:22.509 "num_base_bdevs_discovered": 1, 00:19:22.509 "num_base_bdevs_operational": 1, 00:19:22.509 "base_bdevs_list": [ 00:19:22.509 { 00:19:22.509 "name": null, 00:19:22.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.509 "is_configured": false, 00:19:22.509 "data_offset": 0, 00:19:22.509 "data_size": 7936 00:19:22.509 }, 00:19:22.509 { 00:19:22.509 "name": "BaseBdev2", 00:19:22.509 "uuid": "609c54be-c25a-5a0a-8450-62503312ac38", 00:19:22.509 "is_configured": true, 00:19:22.509 "data_offset": 256, 00:19:22.509 "data_size": 7936 00:19:22.509 } 00:19:22.509 ] 00:19:22.509 }' 00:19:22.509 10:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:22.509 10:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.076 10:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:23.076 10:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:19:23.076 10:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:23.076 10:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:23.076 10:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:23.076 10:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.076 10:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.076 10:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.076 10:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.076 10:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.076 10:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:23.076 "name": "raid_bdev1", 00:19:23.076 "uuid": "72bb65a6-9c09-4721-9c81-9d4afae2ca71", 00:19:23.076 "strip_size_kb": 0, 00:19:23.076 "state": "online", 00:19:23.076 "raid_level": "raid1", 00:19:23.076 "superblock": true, 00:19:23.076 "num_base_bdevs": 2, 00:19:23.076 "num_base_bdevs_discovered": 1, 00:19:23.076 "num_base_bdevs_operational": 1, 00:19:23.076 "base_bdevs_list": [ 00:19:23.076 { 00:19:23.076 "name": null, 00:19:23.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.076 "is_configured": false, 00:19:23.076 "data_offset": 0, 00:19:23.076 "data_size": 7936 00:19:23.076 }, 00:19:23.076 { 00:19:23.076 "name": "BaseBdev2", 00:19:23.076 "uuid": "609c54be-c25a-5a0a-8450-62503312ac38", 00:19:23.076 "is_configured": true, 00:19:23.076 "data_offset": 256, 00:19:23.076 "data_size": 7936 00:19:23.076 } 00:19:23.076 ] 00:19:23.076 }' 
00:19:23.076 10:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:23.076 10:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:23.076 10:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:23.076 10:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:23.076 10:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:23.076 10:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.076 10:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.076 [2024-11-15 10:47:44.211952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:23.076 [2024-11-15 10:47:44.228484] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:23.076 10:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.076 10:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:23.076 [2024-11-15 10:47:44.231189] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:24.450 10:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:24.450 10:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:24.450 10:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:24.450 10:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:19:24.450 10:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:24.450 10:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.450 10:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.450 10:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.450 10:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:24.450 10:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.450 10:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:24.450 "name": "raid_bdev1", 00:19:24.450 "uuid": "72bb65a6-9c09-4721-9c81-9d4afae2ca71", 00:19:24.450 "strip_size_kb": 0, 00:19:24.450 "state": "online", 00:19:24.450 "raid_level": "raid1", 00:19:24.450 "superblock": true, 00:19:24.450 "num_base_bdevs": 2, 00:19:24.450 "num_base_bdevs_discovered": 2, 00:19:24.450 "num_base_bdevs_operational": 2, 00:19:24.450 "process": { 00:19:24.450 "type": "rebuild", 00:19:24.450 "target": "spare", 00:19:24.450 "progress": { 00:19:24.450 "blocks": 2560, 00:19:24.450 "percent": 32 00:19:24.450 } 00:19:24.450 }, 00:19:24.450 "base_bdevs_list": [ 00:19:24.450 { 00:19:24.450 "name": "spare", 00:19:24.450 "uuid": "24dfa7b9-9304-5317-9f74-fc2196d7a88b", 00:19:24.450 "is_configured": true, 00:19:24.450 "data_offset": 256, 00:19:24.450 "data_size": 7936 00:19:24.450 }, 00:19:24.450 { 00:19:24.450 "name": "BaseBdev2", 00:19:24.450 "uuid": "609c54be-c25a-5a0a-8450-62503312ac38", 00:19:24.450 "is_configured": true, 00:19:24.450 "data_offset": 256, 00:19:24.450 "data_size": 7936 00:19:24.450 } 00:19:24.450 ] 00:19:24.450 }' 00:19:24.450 10:47:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:24.450 10:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:24.450 10:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:24.450 10:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:24.450 10:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:24.450 10:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:24.450 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:24.450 10:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:24.450 10:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:24.450 10:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:24.450 10:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=794 00:19:24.450 10:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:24.450 10:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:24.450 10:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:24.450 10:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:24.450 10:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:24.450 10:47:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:24.450 10:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.450 10:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.450 10:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.450 10:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:24.450 10:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.450 10:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:24.450 "name": "raid_bdev1", 00:19:24.450 "uuid": "72bb65a6-9c09-4721-9c81-9d4afae2ca71", 00:19:24.450 "strip_size_kb": 0, 00:19:24.450 "state": "online", 00:19:24.450 "raid_level": "raid1", 00:19:24.450 "superblock": true, 00:19:24.450 "num_base_bdevs": 2, 00:19:24.450 "num_base_bdevs_discovered": 2, 00:19:24.450 "num_base_bdevs_operational": 2, 00:19:24.450 "process": { 00:19:24.450 "type": "rebuild", 00:19:24.450 "target": "spare", 00:19:24.450 "progress": { 00:19:24.450 "blocks": 2816, 00:19:24.450 "percent": 35 00:19:24.450 } 00:19:24.450 }, 00:19:24.450 "base_bdevs_list": [ 00:19:24.450 { 00:19:24.450 "name": "spare", 00:19:24.450 "uuid": "24dfa7b9-9304-5317-9f74-fc2196d7a88b", 00:19:24.450 "is_configured": true, 00:19:24.450 "data_offset": 256, 00:19:24.450 "data_size": 7936 00:19:24.450 }, 00:19:24.450 { 00:19:24.450 "name": "BaseBdev2", 00:19:24.451 "uuid": "609c54be-c25a-5a0a-8450-62503312ac38", 00:19:24.451 "is_configured": true, 00:19:24.451 "data_offset": 256, 00:19:24.451 "data_size": 7936 00:19:24.451 } 00:19:24.451 ] 00:19:24.451 }' 00:19:24.451 10:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:24.451 10:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:24.451 10:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:24.451 10:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:24.451 10:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:25.825 10:47:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:25.825 10:47:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:25.825 10:47:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:25.825 10:47:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:25.825 10:47:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:25.825 10:47:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:25.825 10:47:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.825 10:47:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.825 10:47:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.825 10:47:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.825 10:47:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.825 10:47:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:25.825 "name": "raid_bdev1", 00:19:25.825 "uuid": "72bb65a6-9c09-4721-9c81-9d4afae2ca71", 00:19:25.825 "strip_size_kb": 0, 00:19:25.825 "state": "online", 00:19:25.825 "raid_level": "raid1", 00:19:25.825 "superblock": true, 00:19:25.825 "num_base_bdevs": 2, 00:19:25.825 "num_base_bdevs_discovered": 2, 00:19:25.825 "num_base_bdevs_operational": 2, 00:19:25.825 "process": { 00:19:25.825 "type": "rebuild", 00:19:25.825 "target": "spare", 00:19:25.825 "progress": { 00:19:25.825 "blocks": 5888, 00:19:25.825 "percent": 74 00:19:25.825 } 00:19:25.825 }, 00:19:25.825 "base_bdevs_list": [ 00:19:25.825 { 00:19:25.825 "name": "spare", 00:19:25.825 "uuid": "24dfa7b9-9304-5317-9f74-fc2196d7a88b", 00:19:25.825 "is_configured": true, 00:19:25.825 "data_offset": 256, 00:19:25.825 "data_size": 7936 00:19:25.825 }, 00:19:25.826 { 00:19:25.826 "name": "BaseBdev2", 00:19:25.826 "uuid": "609c54be-c25a-5a0a-8450-62503312ac38", 00:19:25.826 "is_configured": true, 00:19:25.826 "data_offset": 256, 00:19:25.826 "data_size": 7936 00:19:25.826 } 00:19:25.826 ] 00:19:25.826 }' 00:19:25.826 10:47:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:25.826 10:47:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:25.826 10:47:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:25.826 10:47:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:25.826 10:47:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:26.392 [2024-11-15 10:47:47.354871] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:26.392 [2024-11-15 10:47:47.355185] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:26.392 [2024-11-15 10:47:47.355359] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:26.651 10:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:26.651 10:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:26.651 10:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:26.651 10:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:26.651 10:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:26.651 10:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:26.651 10:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.651 10:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.651 10:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.651 10:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:26.651 10:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.651 10:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:26.651 "name": "raid_bdev1", 00:19:26.651 "uuid": "72bb65a6-9c09-4721-9c81-9d4afae2ca71", 00:19:26.651 "strip_size_kb": 0, 00:19:26.651 "state": "online", 00:19:26.651 "raid_level": "raid1", 00:19:26.651 "superblock": true, 00:19:26.651 "num_base_bdevs": 2, 00:19:26.651 
"num_base_bdevs_discovered": 2, 00:19:26.651 "num_base_bdevs_operational": 2, 00:19:26.651 "base_bdevs_list": [ 00:19:26.651 { 00:19:26.651 "name": "spare", 00:19:26.651 "uuid": "24dfa7b9-9304-5317-9f74-fc2196d7a88b", 00:19:26.651 "is_configured": true, 00:19:26.651 "data_offset": 256, 00:19:26.651 "data_size": 7936 00:19:26.651 }, 00:19:26.651 { 00:19:26.651 "name": "BaseBdev2", 00:19:26.651 "uuid": "609c54be-c25a-5a0a-8450-62503312ac38", 00:19:26.651 "is_configured": true, 00:19:26.651 "data_offset": 256, 00:19:26.651 "data_size": 7936 00:19:26.651 } 00:19:26.651 ] 00:19:26.651 }' 00:19:26.651 10:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:26.909 10:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:26.909 10:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:26.909 10:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:26.910 10:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:19:26.910 10:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:26.910 10:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:26.910 10:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:26.910 10:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:26.910 10:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:26.910 10:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.910 10:47:47 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.910 10:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.910 10:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:26.910 10:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.910 10:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:26.910 "name": "raid_bdev1", 00:19:26.910 "uuid": "72bb65a6-9c09-4721-9c81-9d4afae2ca71", 00:19:26.910 "strip_size_kb": 0, 00:19:26.910 "state": "online", 00:19:26.910 "raid_level": "raid1", 00:19:26.910 "superblock": true, 00:19:26.910 "num_base_bdevs": 2, 00:19:26.910 "num_base_bdevs_discovered": 2, 00:19:26.910 "num_base_bdevs_operational": 2, 00:19:26.910 "base_bdevs_list": [ 00:19:26.910 { 00:19:26.910 "name": "spare", 00:19:26.910 "uuid": "24dfa7b9-9304-5317-9f74-fc2196d7a88b", 00:19:26.910 "is_configured": true, 00:19:26.910 "data_offset": 256, 00:19:26.910 "data_size": 7936 00:19:26.910 }, 00:19:26.910 { 00:19:26.910 "name": "BaseBdev2", 00:19:26.910 "uuid": "609c54be-c25a-5a0a-8450-62503312ac38", 00:19:26.910 "is_configured": true, 00:19:26.910 "data_offset": 256, 00:19:26.910 "data_size": 7936 00:19:26.910 } 00:19:26.910 ] 00:19:26.910 }' 00:19:26.910 10:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:26.910 10:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:26.910 10:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:26.910 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:26.910 10:47:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:26.910 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:26.910 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:26.910 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:26.910 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:26.910 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:26.910 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:26.910 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:26.910 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:26.910 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:26.910 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.910 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.910 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.910 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:27.169 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.169 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:27.169 "name": 
"raid_bdev1", 00:19:27.169 "uuid": "72bb65a6-9c09-4721-9c81-9d4afae2ca71", 00:19:27.169 "strip_size_kb": 0, 00:19:27.169 "state": "online", 00:19:27.169 "raid_level": "raid1", 00:19:27.169 "superblock": true, 00:19:27.169 "num_base_bdevs": 2, 00:19:27.169 "num_base_bdevs_discovered": 2, 00:19:27.169 "num_base_bdevs_operational": 2, 00:19:27.169 "base_bdevs_list": [ 00:19:27.169 { 00:19:27.169 "name": "spare", 00:19:27.169 "uuid": "24dfa7b9-9304-5317-9f74-fc2196d7a88b", 00:19:27.169 "is_configured": true, 00:19:27.169 "data_offset": 256, 00:19:27.169 "data_size": 7936 00:19:27.169 }, 00:19:27.169 { 00:19:27.169 "name": "BaseBdev2", 00:19:27.169 "uuid": "609c54be-c25a-5a0a-8450-62503312ac38", 00:19:27.169 "is_configured": true, 00:19:27.169 "data_offset": 256, 00:19:27.169 "data_size": 7936 00:19:27.169 } 00:19:27.169 ] 00:19:27.169 }' 00:19:27.169 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:27.169 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:27.764 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:27.764 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.764 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:27.764 [2024-11-15 10:47:48.605311] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:27.764 [2024-11-15 10:47:48.605563] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:27.764 [2024-11-15 10:47:48.605869] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:27.764 [2024-11-15 10:47:48.606032] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:27.764 [2024-11-15 
10:47:48.606063] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:27.764 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.764 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.764 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:19:27.764 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.764 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:27.764 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.764 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:27.764 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:19:27.764 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:27.764 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:27.764 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.764 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:27.764 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.764 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:27.764 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.764 10:47:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:27.764 [2024-11-15 10:47:48.673233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:27.764 [2024-11-15 10:47:48.673313] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:27.764 [2024-11-15 10:47:48.673347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:19:27.764 [2024-11-15 10:47:48.673362] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:27.764 [2024-11-15 10:47:48.675932] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:27.764 [2024-11-15 10:47:48.675976] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:27.764 [2024-11-15 10:47:48.676072] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:27.764 [2024-11-15 10:47:48.676144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:27.764 [2024-11-15 10:47:48.676292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:27.764 spare 00:19:27.764 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.764 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:27.764 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.765 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:27.765 [2024-11-15 10:47:48.776417] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:27.765 [2024-11-15 10:47:48.776469] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:27.765 [2024-11-15 10:47:48.776656] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:27.765 [2024-11-15 10:47:48.776814] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:27.765 [2024-11-15 10:47:48.776831] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:27.765 [2024-11-15 10:47:48.776963] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:27.765 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.765 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:27.765 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:27.765 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:27.765 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:27.765 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:27.765 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:27.765 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:27.765 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:27.765 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:27.765 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:27.765 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.765 10:47:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.765 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.765 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:27.765 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.765 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:27.765 "name": "raid_bdev1", 00:19:27.765 "uuid": "72bb65a6-9c09-4721-9c81-9d4afae2ca71", 00:19:27.765 "strip_size_kb": 0, 00:19:27.765 "state": "online", 00:19:27.765 "raid_level": "raid1", 00:19:27.765 "superblock": true, 00:19:27.765 "num_base_bdevs": 2, 00:19:27.765 "num_base_bdevs_discovered": 2, 00:19:27.765 "num_base_bdevs_operational": 2, 00:19:27.765 "base_bdevs_list": [ 00:19:27.765 { 00:19:27.765 "name": "spare", 00:19:27.765 "uuid": "24dfa7b9-9304-5317-9f74-fc2196d7a88b", 00:19:27.765 "is_configured": true, 00:19:27.765 "data_offset": 256, 00:19:27.765 "data_size": 7936 00:19:27.765 }, 00:19:27.765 { 00:19:27.765 "name": "BaseBdev2", 00:19:27.765 "uuid": "609c54be-c25a-5a0a-8450-62503312ac38", 00:19:27.765 "is_configured": true, 00:19:27.765 "data_offset": 256, 00:19:27.765 "data_size": 7936 00:19:27.765 } 00:19:27.765 ] 00:19:27.765 }' 00:19:27.765 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:27.765 10:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:28.332 10:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:28.332 10:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:28.332 10:47:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:28.332 10:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:28.332 10:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:28.332 10:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.332 10:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.332 10:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.332 10:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:28.332 10:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.332 10:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:28.332 "name": "raid_bdev1", 00:19:28.332 "uuid": "72bb65a6-9c09-4721-9c81-9d4afae2ca71", 00:19:28.332 "strip_size_kb": 0, 00:19:28.332 "state": "online", 00:19:28.332 "raid_level": "raid1", 00:19:28.332 "superblock": true, 00:19:28.332 "num_base_bdevs": 2, 00:19:28.332 "num_base_bdevs_discovered": 2, 00:19:28.332 "num_base_bdevs_operational": 2, 00:19:28.332 "base_bdevs_list": [ 00:19:28.332 { 00:19:28.332 "name": "spare", 00:19:28.332 "uuid": "24dfa7b9-9304-5317-9f74-fc2196d7a88b", 00:19:28.332 "is_configured": true, 00:19:28.332 "data_offset": 256, 00:19:28.332 "data_size": 7936 00:19:28.332 }, 00:19:28.332 { 00:19:28.332 "name": "BaseBdev2", 00:19:28.332 "uuid": "609c54be-c25a-5a0a-8450-62503312ac38", 00:19:28.332 "is_configured": true, 00:19:28.332 "data_offset": 256, 00:19:28.332 "data_size": 7936 00:19:28.332 } 00:19:28.332 ] 00:19:28.333 }' 00:19:28.333 10:47:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:28.333 10:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:28.333 10:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:28.333 10:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:28.333 10:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.333 10:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.333 10:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:28.333 10:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:28.333 10:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.333 10:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:28.333 10:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:28.333 10:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.333 10:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:28.333 [2024-11-15 10:47:49.477630] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:28.333 10:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.333 10:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:28.333 10:47:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:28.333 10:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:28.333 10:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:28.333 10:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:28.333 10:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:28.333 10:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:28.333 10:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:28.333 10:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:28.333 10:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:28.333 10:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.333 10:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.333 10:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:28.333 10:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.591 10:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.591 10:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:28.591 "name": "raid_bdev1", 00:19:28.591 "uuid": "72bb65a6-9c09-4721-9c81-9d4afae2ca71", 00:19:28.591 "strip_size_kb": 0, 00:19:28.591 "state": "online", 00:19:28.591 
"raid_level": "raid1", 00:19:28.591 "superblock": true, 00:19:28.591 "num_base_bdevs": 2, 00:19:28.591 "num_base_bdevs_discovered": 1, 00:19:28.591 "num_base_bdevs_operational": 1, 00:19:28.591 "base_bdevs_list": [ 00:19:28.591 { 00:19:28.591 "name": null, 00:19:28.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.591 "is_configured": false, 00:19:28.591 "data_offset": 0, 00:19:28.591 "data_size": 7936 00:19:28.591 }, 00:19:28.591 { 00:19:28.591 "name": "BaseBdev2", 00:19:28.591 "uuid": "609c54be-c25a-5a0a-8450-62503312ac38", 00:19:28.591 "is_configured": true, 00:19:28.591 "data_offset": 256, 00:19:28.591 "data_size": 7936 00:19:28.591 } 00:19:28.591 ] 00:19:28.591 }' 00:19:28.591 10:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:28.591 10:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:28.850 10:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:28.850 10:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.850 10:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:28.850 [2024-11-15 10:47:50.001771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:28.850 [2024-11-15 10:47:50.002031] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:28.850 [2024-11-15 10:47:50.002060] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:28.850 [2024-11-15 10:47:50.002129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:29.109 [2024-11-15 10:47:50.019145] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:29.109 10:47:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.109 10:47:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:29.109 [2024-11-15 10:47:50.021908] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:30.045 10:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:30.045 10:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:30.045 10:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:30.045 10:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:30.045 10:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:30.045 10:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.045 10:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.045 10:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.045 10:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:30.045 10:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.045 10:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:19:30.045 "name": "raid_bdev1", 00:19:30.045 "uuid": "72bb65a6-9c09-4721-9c81-9d4afae2ca71", 00:19:30.045 "strip_size_kb": 0, 00:19:30.045 "state": "online", 00:19:30.045 "raid_level": "raid1", 00:19:30.045 "superblock": true, 00:19:30.045 "num_base_bdevs": 2, 00:19:30.045 "num_base_bdevs_discovered": 2, 00:19:30.045 "num_base_bdevs_operational": 2, 00:19:30.045 "process": { 00:19:30.045 "type": "rebuild", 00:19:30.045 "target": "spare", 00:19:30.045 "progress": { 00:19:30.045 "blocks": 2560, 00:19:30.045 "percent": 32 00:19:30.045 } 00:19:30.045 }, 00:19:30.045 "base_bdevs_list": [ 00:19:30.045 { 00:19:30.045 "name": "spare", 00:19:30.045 "uuid": "24dfa7b9-9304-5317-9f74-fc2196d7a88b", 00:19:30.045 "is_configured": true, 00:19:30.045 "data_offset": 256, 00:19:30.045 "data_size": 7936 00:19:30.045 }, 00:19:30.045 { 00:19:30.045 "name": "BaseBdev2", 00:19:30.045 "uuid": "609c54be-c25a-5a0a-8450-62503312ac38", 00:19:30.045 "is_configured": true, 00:19:30.045 "data_offset": 256, 00:19:30.045 "data_size": 7936 00:19:30.045 } 00:19:30.045 ] 00:19:30.045 }' 00:19:30.045 10:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:30.045 10:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:30.045 10:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:30.045 10:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:30.045 10:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:30.045 10:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.045 10:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:30.045 [2024-11-15 10:47:51.194885] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:30.304 [2024-11-15 10:47:51.230805] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:30.304 [2024-11-15 10:47:51.231077] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:30.304 [2024-11-15 10:47:51.231107] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:30.304 [2024-11-15 10:47:51.231123] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:30.304 10:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.304 10:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:30.304 10:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:30.304 10:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:30.304 10:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:30.304 10:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:30.304 10:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:30.304 10:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:30.304 10:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:30.304 10:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:30.304 10:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:30.304 10:47:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.304 10:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.304 10:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.304 10:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:30.304 10:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.304 10:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:30.304 "name": "raid_bdev1", 00:19:30.304 "uuid": "72bb65a6-9c09-4721-9c81-9d4afae2ca71", 00:19:30.304 "strip_size_kb": 0, 00:19:30.304 "state": "online", 00:19:30.304 "raid_level": "raid1", 00:19:30.304 "superblock": true, 00:19:30.304 "num_base_bdevs": 2, 00:19:30.304 "num_base_bdevs_discovered": 1, 00:19:30.304 "num_base_bdevs_operational": 1, 00:19:30.304 "base_bdevs_list": [ 00:19:30.304 { 00:19:30.304 "name": null, 00:19:30.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.305 "is_configured": false, 00:19:30.305 "data_offset": 0, 00:19:30.305 "data_size": 7936 00:19:30.305 }, 00:19:30.305 { 00:19:30.305 "name": "BaseBdev2", 00:19:30.305 "uuid": "609c54be-c25a-5a0a-8450-62503312ac38", 00:19:30.305 "is_configured": true, 00:19:30.305 "data_offset": 256, 00:19:30.305 "data_size": 7936 00:19:30.305 } 00:19:30.305 ] 00:19:30.305 }' 00:19:30.305 10:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:30.305 10:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:30.907 10:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:30.907 10:47:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.907 10:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:30.907 [2024-11-15 10:47:51.764117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:30.907 [2024-11-15 10:47:51.764331] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:30.907 [2024-11-15 10:47:51.764380] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:30.907 [2024-11-15 10:47:51.764401] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:30.907 [2024-11-15 10:47:51.764715] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:30.907 [2024-11-15 10:47:51.764762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:30.907 [2024-11-15 10:47:51.764859] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:30.907 [2024-11-15 10:47:51.764891] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:30.907 [2024-11-15 10:47:51.764906] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:30.907 [2024-11-15 10:47:51.764953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:30.907 [2024-11-15 10:47:51.780986] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:30.907 spare 00:19:30.907 10:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.907 10:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:30.907 [2024-11-15 10:47:51.783525] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:31.845 10:47:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:31.845 10:47:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:31.845 10:47:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:31.845 10:47:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:31.845 10:47:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:31.845 10:47:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.845 10:47:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.845 10:47:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.845 10:47:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:31.845 10:47:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.845 10:47:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:19:31.845 "name": "raid_bdev1", 00:19:31.845 "uuid": "72bb65a6-9c09-4721-9c81-9d4afae2ca71", 00:19:31.845 "strip_size_kb": 0, 00:19:31.845 "state": "online", 00:19:31.845 "raid_level": "raid1", 00:19:31.845 "superblock": true, 00:19:31.845 "num_base_bdevs": 2, 00:19:31.845 "num_base_bdevs_discovered": 2, 00:19:31.845 "num_base_bdevs_operational": 2, 00:19:31.845 "process": { 00:19:31.845 "type": "rebuild", 00:19:31.845 "target": "spare", 00:19:31.845 "progress": { 00:19:31.845 "blocks": 2560, 00:19:31.845 "percent": 32 00:19:31.845 } 00:19:31.845 }, 00:19:31.845 "base_bdevs_list": [ 00:19:31.845 { 00:19:31.845 "name": "spare", 00:19:31.845 "uuid": "24dfa7b9-9304-5317-9f74-fc2196d7a88b", 00:19:31.845 "is_configured": true, 00:19:31.845 "data_offset": 256, 00:19:31.845 "data_size": 7936 00:19:31.845 }, 00:19:31.845 { 00:19:31.845 "name": "BaseBdev2", 00:19:31.845 "uuid": "609c54be-c25a-5a0a-8450-62503312ac38", 00:19:31.845 "is_configured": true, 00:19:31.845 "data_offset": 256, 00:19:31.845 "data_size": 7936 00:19:31.845 } 00:19:31.845 ] 00:19:31.845 }' 00:19:31.845 10:47:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:31.845 10:47:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:31.845 10:47:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:31.845 10:47:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:31.845 10:47:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:31.845 10:47:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.845 10:47:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:31.845 [2024-11-15 
10:47:52.953228] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:31.845 [2024-11-15 10:47:52.993183] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:31.845 [2024-11-15 10:47:52.993489] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:31.845 [2024-11-15 10:47:52.993753] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:31.845 [2024-11-15 10:47:52.993778] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:32.104 10:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.104 10:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:32.104 10:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:32.104 10:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:32.104 10:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:32.104 10:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:32.104 10:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:32.104 10:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:32.104 10:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:32.104 10:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:32.104 10:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:32.104 10:47:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.104 10:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.104 10:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.104 10:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:32.104 10:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.104 10:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:32.104 "name": "raid_bdev1", 00:19:32.104 "uuid": "72bb65a6-9c09-4721-9c81-9d4afae2ca71", 00:19:32.104 "strip_size_kb": 0, 00:19:32.104 "state": "online", 00:19:32.104 "raid_level": "raid1", 00:19:32.104 "superblock": true, 00:19:32.105 "num_base_bdevs": 2, 00:19:32.105 "num_base_bdevs_discovered": 1, 00:19:32.105 "num_base_bdevs_operational": 1, 00:19:32.105 "base_bdevs_list": [ 00:19:32.105 { 00:19:32.105 "name": null, 00:19:32.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.105 "is_configured": false, 00:19:32.105 "data_offset": 0, 00:19:32.105 "data_size": 7936 00:19:32.105 }, 00:19:32.105 { 00:19:32.105 "name": "BaseBdev2", 00:19:32.105 "uuid": "609c54be-c25a-5a0a-8450-62503312ac38", 00:19:32.105 "is_configured": true, 00:19:32.105 "data_offset": 256, 00:19:32.105 "data_size": 7936 00:19:32.105 } 00:19:32.105 ] 00:19:32.105 }' 00:19:32.105 10:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:32.105 10:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:32.673 10:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:32.673 10:47:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:32.673 10:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:32.673 10:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:32.673 10:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:32.673 10:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.673 10:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.673 10:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:32.673 10:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.673 10:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.673 10:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:32.673 "name": "raid_bdev1", 00:19:32.673 "uuid": "72bb65a6-9c09-4721-9c81-9d4afae2ca71", 00:19:32.673 "strip_size_kb": 0, 00:19:32.673 "state": "online", 00:19:32.673 "raid_level": "raid1", 00:19:32.673 "superblock": true, 00:19:32.673 "num_base_bdevs": 2, 00:19:32.673 "num_base_bdevs_discovered": 1, 00:19:32.673 "num_base_bdevs_operational": 1, 00:19:32.673 "base_bdevs_list": [ 00:19:32.673 { 00:19:32.673 "name": null, 00:19:32.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.673 "is_configured": false, 00:19:32.673 "data_offset": 0, 00:19:32.673 "data_size": 7936 00:19:32.673 }, 00:19:32.673 { 00:19:32.673 "name": "BaseBdev2", 00:19:32.673 "uuid": "609c54be-c25a-5a0a-8450-62503312ac38", 00:19:32.673 "is_configured": true, 00:19:32.673 "data_offset": 256, 
00:19:32.673 "data_size": 7936 00:19:32.673 } 00:19:32.673 ] 00:19:32.673 }' 00:19:32.673 10:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:32.673 10:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:32.673 10:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:32.673 10:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:32.673 10:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:32.673 10:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.673 10:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:32.673 10:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.673 10:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:32.673 10:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.673 10:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:32.674 [2024-11-15 10:47:53.701783] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:32.674 [2024-11-15 10:47:53.701852] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:32.674 [2024-11-15 10:47:53.701904] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:32.674 [2024-11-15 10:47:53.701934] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:32.674 [2024-11-15 10:47:53.702138] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:32.674 [2024-11-15 10:47:53.702159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:32.674 [2024-11-15 10:47:53.702227] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:32.674 [2024-11-15 10:47:53.702246] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:32.674 [2024-11-15 10:47:53.702258] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:32.674 [2024-11-15 10:47:53.702271] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:32.674 BaseBdev1 00:19:32.674 10:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.674 10:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:33.609 10:47:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:33.609 10:47:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:33.609 10:47:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:33.609 10:47:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:33.609 10:47:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:33.609 10:47:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:33.609 10:47:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:33.609 10:47:54 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:33.609 10:47:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:33.609 10:47:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:33.609 10:47:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.609 10:47:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.609 10:47:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:33.609 10:47:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.609 10:47:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.609 10:47:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:33.609 "name": "raid_bdev1", 00:19:33.609 "uuid": "72bb65a6-9c09-4721-9c81-9d4afae2ca71", 00:19:33.609 "strip_size_kb": 0, 00:19:33.609 "state": "online", 00:19:33.609 "raid_level": "raid1", 00:19:33.609 "superblock": true, 00:19:33.609 "num_base_bdevs": 2, 00:19:33.609 "num_base_bdevs_discovered": 1, 00:19:33.609 "num_base_bdevs_operational": 1, 00:19:33.609 "base_bdevs_list": [ 00:19:33.609 { 00:19:33.609 "name": null, 00:19:33.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.609 "is_configured": false, 00:19:33.609 "data_offset": 0, 00:19:33.610 "data_size": 7936 00:19:33.610 }, 00:19:33.610 { 00:19:33.610 "name": "BaseBdev2", 00:19:33.610 "uuid": "609c54be-c25a-5a0a-8450-62503312ac38", 00:19:33.610 "is_configured": true, 00:19:33.610 "data_offset": 256, 00:19:33.610 "data_size": 7936 00:19:33.610 } 00:19:33.610 ] 00:19:33.610 }' 00:19:33.610 10:47:54 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:33.610 10:47:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:34.179 10:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:34.179 10:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:34.179 10:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:34.180 10:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:34.180 10:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:34.180 10:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.180 10:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.180 10:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.180 10:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:34.180 10:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.180 10:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:34.180 "name": "raid_bdev1", 00:19:34.180 "uuid": "72bb65a6-9c09-4721-9c81-9d4afae2ca71", 00:19:34.180 "strip_size_kb": 0, 00:19:34.180 "state": "online", 00:19:34.180 "raid_level": "raid1", 00:19:34.180 "superblock": true, 00:19:34.180 "num_base_bdevs": 2, 00:19:34.180 "num_base_bdevs_discovered": 1, 00:19:34.180 "num_base_bdevs_operational": 1, 00:19:34.180 "base_bdevs_list": [ 00:19:34.180 { 00:19:34.180 "name": 
null, 00:19:34.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.180 "is_configured": false, 00:19:34.180 "data_offset": 0, 00:19:34.180 "data_size": 7936 00:19:34.180 }, 00:19:34.180 { 00:19:34.180 "name": "BaseBdev2", 00:19:34.180 "uuid": "609c54be-c25a-5a0a-8450-62503312ac38", 00:19:34.180 "is_configured": true, 00:19:34.180 "data_offset": 256, 00:19:34.180 "data_size": 7936 00:19:34.180 } 00:19:34.180 ] 00:19:34.180 }' 00:19:34.180 10:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:34.180 10:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:34.180 10:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:34.441 10:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:34.442 10:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:34.442 10:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:19:34.442 10:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:34.442 10:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:34.442 10:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:34.442 10:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:34.442 10:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:34.442 10:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:34.442 10:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.442 10:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:34.442 [2024-11-15 10:47:55.374328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:34.442 [2024-11-15 10:47:55.374535] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:34.442 [2024-11-15 10:47:55.374590] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:34.442 request: 00:19:34.442 { 00:19:34.442 "base_bdev": "BaseBdev1", 00:19:34.442 "raid_bdev": "raid_bdev1", 00:19:34.442 "method": "bdev_raid_add_base_bdev", 00:19:34.442 "req_id": 1 00:19:34.442 } 00:19:34.442 Got JSON-RPC error response 00:19:34.442 response: 00:19:34.442 { 00:19:34.442 "code": -22, 00:19:34.442 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:34.442 } 00:19:34.442 10:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:34.442 10:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:19:34.442 10:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:34.442 10:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:34.442 10:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:34.442 10:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:35.378 10:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:19:35.378 10:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:35.378 10:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:35.378 10:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:35.378 10:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:35.378 10:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:35.378 10:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:35.378 10:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:35.378 10:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:35.378 10:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:35.378 10:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.378 10:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.378 10:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.378 10:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:35.378 10:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.378 10:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:35.378 "name": "raid_bdev1", 00:19:35.378 "uuid": "72bb65a6-9c09-4721-9c81-9d4afae2ca71", 00:19:35.378 "strip_size_kb": 0, 
00:19:35.378 "state": "online", 00:19:35.378 "raid_level": "raid1", 00:19:35.378 "superblock": true, 00:19:35.378 "num_base_bdevs": 2, 00:19:35.378 "num_base_bdevs_discovered": 1, 00:19:35.378 "num_base_bdevs_operational": 1, 00:19:35.378 "base_bdevs_list": [ 00:19:35.378 { 00:19:35.378 "name": null, 00:19:35.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:35.378 "is_configured": false, 00:19:35.378 "data_offset": 0, 00:19:35.378 "data_size": 7936 00:19:35.378 }, 00:19:35.378 { 00:19:35.378 "name": "BaseBdev2", 00:19:35.378 "uuid": "609c54be-c25a-5a0a-8450-62503312ac38", 00:19:35.378 "is_configured": true, 00:19:35.378 "data_offset": 256, 00:19:35.378 "data_size": 7936 00:19:35.378 } 00:19:35.378 ] 00:19:35.378 }' 00:19:35.378 10:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:35.378 10:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:35.945 10:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:35.945 10:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:35.945 10:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:35.945 10:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:35.945 10:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:35.945 10:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.945 10:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.945 10:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:35.945 10:47:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.945 10:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.945 10:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:35.945 "name": "raid_bdev1", 00:19:35.945 "uuid": "72bb65a6-9c09-4721-9c81-9d4afae2ca71", 00:19:35.945 "strip_size_kb": 0, 00:19:35.945 "state": "online", 00:19:35.945 "raid_level": "raid1", 00:19:35.946 "superblock": true, 00:19:35.946 "num_base_bdevs": 2, 00:19:35.946 "num_base_bdevs_discovered": 1, 00:19:35.946 "num_base_bdevs_operational": 1, 00:19:35.946 "base_bdevs_list": [ 00:19:35.946 { 00:19:35.946 "name": null, 00:19:35.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:35.946 "is_configured": false, 00:19:35.946 "data_offset": 0, 00:19:35.946 "data_size": 7936 00:19:35.946 }, 00:19:35.946 { 00:19:35.946 "name": "BaseBdev2", 00:19:35.946 "uuid": "609c54be-c25a-5a0a-8450-62503312ac38", 00:19:35.946 "is_configured": true, 00:19:35.946 "data_offset": 256, 00:19:35.946 "data_size": 7936 00:19:35.946 } 00:19:35.946 ] 00:19:35.946 }' 00:19:35.946 10:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:35.946 10:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:35.946 10:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:35.946 10:47:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:35.946 10:47:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89449 00:19:35.946 10:47:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89449 ']' 00:19:35.946 10:47:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89449 00:19:35.946 10:47:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:19:35.946 10:47:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:35.946 10:47:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89449 00:19:35.946 killing process with pid 89449 00:19:35.946 Received shutdown signal, test time was about 60.000000 seconds 00:19:35.946 00:19:35.946 Latency(us) 00:19:35.946 [2024-11-15T10:47:57.108Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:35.946 [2024-11-15T10:47:57.108Z] =================================================================================================================== 00:19:35.946 [2024-11-15T10:47:57.108Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:35.946 10:47:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:35.946 10:47:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:35.946 10:47:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89449' 00:19:35.946 10:47:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89449 00:19:35.946 [2024-11-15 10:47:57.077646] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:35.946 10:47:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89449 00:19:35.946 [2024-11-15 10:47:57.077831] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:35.946 [2024-11-15 10:47:57.077935] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:19:35.946 [2024-11-15 10:47:57.077954] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:36.205 [2024-11-15 10:47:57.333755] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:37.581 10:47:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:19:37.581 00:19:37.581 real 0m18.388s 00:19:37.581 user 0m25.020s 00:19:37.581 sys 0m1.427s 00:19:37.581 10:47:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:37.581 ************************************ 00:19:37.581 END TEST raid_rebuild_test_sb_md_interleaved 00:19:37.581 ************************************ 00:19:37.581 10:47:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:37.581 10:47:58 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:19:37.581 10:47:58 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:19:37.581 10:47:58 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89449 ']' 00:19:37.581 10:47:58 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89449 00:19:37.581 10:47:58 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:19:37.581 ************************************ 00:19:37.581 END TEST bdev_raid 00:19:37.581 ************************************ 00:19:37.581 00:19:37.581 real 12m56.507s 00:19:37.581 user 18m17.875s 00:19:37.581 sys 1m43.145s 00:19:37.581 10:47:58 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:37.581 10:47:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:37.581 10:47:58 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:37.581 10:47:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:37.581 10:47:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:37.581 10:47:58 -- common/autotest_common.sh@10 -- # set +x 00:19:37.581 
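The rebuild test above repeatedly verifies RAID state by calling `bdev_raid_get_bdevs` over RPC and filtering the JSON with `jq -r '.[] | select(.name == "raid_bdev1")'`. A minimal sketch of that select-and-assert pattern follows; `verify_state` and the inline sample JSON are illustrative stand-ins, not SPDK's actual helpers:

```shell
#!/usr/bin/env bash
# Sketch of the select-and-assert pattern traced above: pull one bdev's
# record out of bdev_raid_get_bdevs-style output with jq, then compare a
# field against the expected value. verify_state is a hypothetical helper.
set -euo pipefail

verify_state() {
    local raid_json=$1 expected_state=$2
    local info state
    # The real test does: rpc_cmd bdev_raid_get_bdevs all | jq -r 'select(...)'
    info=$(jq -r '.[] | select(.name == "raid_bdev1")' <<<"$raid_json")
    state=$(jq -r '.state' <<<"$info")
    [[ "$state" == "$expected_state" ]]
}

sample='[{"name": "raid_bdev1", "state": "online", "raid_level": "raid1"}]'
if verify_state "$sample" online; then
    echo "state matches"
fi
```

The same shape covers the later `jq -r '.process.type // "none"'` checks: extract one field, fall back to a default, and compare with `[[ ... ]]`.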
************************************ 00:19:37.581 START TEST spdkcli_raid 00:19:37.581 ************************************ 00:19:37.581 10:47:58 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:37.581 * Looking for test storage... 00:19:37.581 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:37.581 10:47:58 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:37.581 10:47:58 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:19:37.581 10:47:58 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:37.581 10:47:58 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:37.581 10:47:58 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:37.581 10:47:58 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:37.581 10:47:58 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:37.581 10:47:58 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:19:37.581 10:47:58 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:19:37.582 10:47:58 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:19:37.582 10:47:58 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:19:37.582 10:47:58 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:19:37.582 10:47:58 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:19:37.582 10:47:58 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:19:37.582 10:47:58 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:37.582 10:47:58 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:19:37.582 10:47:58 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:19:37.582 10:47:58 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:37.582 10:47:58 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:37.582 10:47:58 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:19:37.582 10:47:58 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:19:37.582 10:47:58 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:37.582 10:47:58 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:19:37.582 10:47:58 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:19:37.582 10:47:58 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:19:37.582 10:47:58 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:19:37.582 10:47:58 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:37.582 10:47:58 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:19:37.582 10:47:58 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:19:37.582 10:47:58 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:37.582 10:47:58 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:37.582 10:47:58 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:19:37.582 10:47:58 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:37.582 10:47:58 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:37.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:37.582 --rc genhtml_branch_coverage=1 00:19:37.582 --rc genhtml_function_coverage=1 00:19:37.582 --rc genhtml_legend=1 00:19:37.582 --rc geninfo_all_blocks=1 00:19:37.582 --rc geninfo_unexecuted_blocks=1 00:19:37.582 00:19:37.582 ' 00:19:37.582 10:47:58 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:37.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:37.582 --rc genhtml_branch_coverage=1 00:19:37.582 --rc genhtml_function_coverage=1 00:19:37.582 --rc genhtml_legend=1 00:19:37.582 --rc geninfo_all_blocks=1 00:19:37.582 --rc geninfo_unexecuted_blocks=1 00:19:37.582 00:19:37.582 ' 00:19:37.582 
10:47:58 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:37.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:37.582 --rc genhtml_branch_coverage=1 00:19:37.582 --rc genhtml_function_coverage=1 00:19:37.582 --rc genhtml_legend=1 00:19:37.582 --rc geninfo_all_blocks=1 00:19:37.582 --rc geninfo_unexecuted_blocks=1 00:19:37.582 00:19:37.582 ' 00:19:37.582 10:47:58 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:37.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:37.582 --rc genhtml_branch_coverage=1 00:19:37.582 --rc genhtml_function_coverage=1 00:19:37.582 --rc genhtml_legend=1 00:19:37.582 --rc geninfo_all_blocks=1 00:19:37.582 --rc geninfo_unexecuted_blocks=1 00:19:37.582 00:19:37.582 ' 00:19:37.582 10:47:58 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:37.582 10:47:58 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:37.582 10:47:58 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:37.582 10:47:58 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:19:37.582 10:47:58 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:19:37.582 10:47:58 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:19:37.582 10:47:58 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:19:37.582 10:47:58 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:19:37.582 10:47:58 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:19:37.582 10:47:58 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:19:37.582 10:47:58 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
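The `lt 1.15 2` / `cmp_versions` trace above splits each dotted version on `.-:` into an array and compares component-wise. A simplified re-implementation of that idea, under the assumption that purely numeric components suffice (`ver_lt` is an illustrative name, not the scripts/common.sh function):

```shell
#!/usr/bin/env bash
# Simplified sketch of the cmp_versions pattern traced above: split dotted
# versions into arrays and compare numerically, left to right, padding the
# shorter version with zeros.

ver_lt() {
    local IFS=.
    local -a v1=($1) v2=($2)
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1  # equal versions are not strictly less-than
}

ver_lt 1.15 2 && echo "1.15 < 2"
```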
00:19:37.582 10:47:58 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:19:37.582 10:47:58 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:19:37.582 10:47:58 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:19:37.582 10:47:58 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:19:37.582 10:47:58 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:19:37.582 10:47:58 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:19:37.582 10:47:58 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:19:37.582 10:47:58 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:19:37.582 10:47:58 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:19:37.582 10:47:58 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:19:37.582 10:47:58 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:19:37.582 10:47:58 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:19:37.582 10:47:58 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:19:37.582 10:47:58 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:19:37.582 10:47:58 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:37.582 10:47:58 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:37.582 10:47:58 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:37.582 10:47:58 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:37.582 10:47:58 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:37.582 10:47:58 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:37.582 10:47:58 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:19:37.582 10:47:58 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:19:37.582 10:47:58 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:37.582 10:47:58 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:37.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:37.582 10:47:58 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:19:37.582 10:47:58 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90137 00:19:37.582 10:47:58 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90137 00:19:37.582 10:47:58 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 90137 ']' 00:19:37.582 10:47:58 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:19:37.582 10:47:58 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.582 10:47:58 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:37.582 10:47:58 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:37.582 10:47:58 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:37.582 10:47:58 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:37.841 [2024-11-15 10:47:58.794548] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:19:37.841 [2024-11-15 10:47:58.794740] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90137 ] 00:19:37.841 [2024-11-15 10:47:58.981463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:38.100 [2024-11-15 10:47:59.112436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.100 [2024-11-15 10:47:59.112469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:39.037 10:47:59 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:39.037 10:47:59 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:19:39.037 10:47:59 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:19:39.037 10:47:59 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:39.037 10:47:59 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:39.037 10:48:00 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:19:39.037 10:48:00 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:39.037 10:48:00 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:39.037 10:48:00 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:19:39.037 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:19:39.037 ' 00:19:40.424 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:19:40.424 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:19:40.695 10:48:01 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:19:40.695 10:48:01 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:40.695 10:48:01 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:19:40.695 10:48:01 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:19:40.695 10:48:01 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:40.695 10:48:01 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:40.695 10:48:01 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:19:40.695 ' 00:19:42.069 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:19:42.069 10:48:02 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:19:42.069 10:48:02 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:42.069 10:48:02 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:42.069 10:48:02 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:19:42.069 10:48:02 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:42.069 10:48:02 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:42.069 10:48:02 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:19:42.069 10:48:02 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:19:42.635 10:48:03 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:19:42.635 10:48:03 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:19:42.635 10:48:03 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:19:42.635 10:48:03 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:42.635 10:48:03 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:42.635 10:48:03 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:19:42.635 10:48:03 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:42.635 10:48:03 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:42.635 10:48:03 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:19:42.635 ' 00:19:43.569 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:19:43.827 10:48:04 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:19:43.827 10:48:04 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:43.827 10:48:04 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:43.827 10:48:04 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:19:43.827 10:48:04 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:43.827 10:48:04 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:43.827 10:48:04 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:19:43.827 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:19:43.827 ' 00:19:45.203 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:19:45.203 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:19:45.461 10:48:06 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:19:45.461 10:48:06 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:45.461 10:48:06 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:45.461 10:48:06 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90137 00:19:45.461 10:48:06 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90137 ']' 00:19:45.461 10:48:06 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90137 00:19:45.461 10:48:06 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:19:45.461 10:48:06 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:45.461 10:48:06 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90137 00:19:45.461 killing process with pid 90137 00:19:45.461 10:48:06 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:45.461 10:48:06 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:45.461 10:48:06 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90137' 00:19:45.461 10:48:06 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 90137 00:19:45.461 10:48:06 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 90137 00:19:47.993 10:48:08 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:19:47.993 10:48:08 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90137 ']' 00:19:47.993 10:48:08 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90137 00:19:47.993 10:48:08 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90137 ']' 00:19:47.993 10:48:08 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90137 00:19:47.993 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (90137) - No such process 00:19:47.993 Process with pid 90137 is not found 00:19:47.993 10:48:08 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 90137 is not found' 00:19:47.993 10:48:08 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:19:47.993 10:48:08 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:19:47.993 10:48:08 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:19:47.993 10:48:08 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:19:47.993 00:19:47.993 real 0m10.375s 00:19:47.993 user 0m21.369s 00:19:47.993 sys 
0m1.252s 00:19:47.993 ************************************ 00:19:47.993 END TEST spdkcli_raid 00:19:47.993 ************************************ 00:19:47.993 10:48:08 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:47.993 10:48:08 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:47.993 10:48:08 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:47.993 10:48:08 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:47.993 10:48:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:47.993 10:48:08 -- common/autotest_common.sh@10 -- # set +x 00:19:47.993 ************************************ 00:19:47.993 START TEST blockdev_raid5f 00:19:47.993 ************************************ 00:19:47.993 10:48:08 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:47.993 * Looking for test storage... 00:19:47.993 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:19:47.993 10:48:08 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:47.993 10:48:08 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:19:47.993 10:48:08 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:47.993 10:48:09 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:47.993 10:48:09 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:47.993 10:48:09 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:47.993 10:48:09 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:47.993 10:48:09 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:19:47.993 10:48:09 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:19:47.993 10:48:09 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:19:47.993 10:48:09 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
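The `killprocess` traces above (and the "No such process" branch during cleanup) follow one pattern: probe liveness with `kill -0`, resolve the command name with `ps`, then terminate. A condensed sketch, assuming a simplified function body rather than SPDK's exact autotest_common.sh code:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern traced above: kill -0 sends no signal,
# it only tests whether the pid exists, which drives the "is not found"
# branch seen during cleanup.

killprocess() {
    local pid=$1
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "Process with pid $pid is not found"
        return 0
    fi
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    echo "killing process with pid $pid ($process_name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true
}

sleep 60 &
killprocess $!
```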
ver2 00:19:47.993 10:48:09 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:19:47.993 10:48:09 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:19:47.993 10:48:09 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:19:47.993 10:48:09 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:47.993 10:48:09 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:19:47.993 10:48:09 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:19:47.993 10:48:09 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:47.993 10:48:09 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:47.993 10:48:09 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:19:47.993 10:48:09 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:19:47.993 10:48:09 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:47.993 10:48:09 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:19:47.993 10:48:09 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:19:47.994 10:48:09 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:19:47.994 10:48:09 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:19:47.994 10:48:09 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:47.994 10:48:09 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:19:47.994 10:48:09 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:19:47.994 10:48:09 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:47.994 10:48:09 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:47.994 10:48:09 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:19:47.994 10:48:09 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:47.994 10:48:09 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:47.994 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.994 --rc genhtml_branch_coverage=1 00:19:47.994 --rc genhtml_function_coverage=1 00:19:47.994 --rc genhtml_legend=1 00:19:47.994 --rc geninfo_all_blocks=1 00:19:47.994 --rc geninfo_unexecuted_blocks=1 00:19:47.994 00:19:47.994 ' 00:19:47.994 10:48:09 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:47.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.994 --rc genhtml_branch_coverage=1 00:19:47.994 --rc genhtml_function_coverage=1 00:19:47.994 --rc genhtml_legend=1 00:19:47.994 --rc geninfo_all_blocks=1 00:19:47.994 --rc geninfo_unexecuted_blocks=1 00:19:47.994 00:19:47.994 ' 00:19:47.994 10:48:09 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:47.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.994 --rc genhtml_branch_coverage=1 00:19:47.994 --rc genhtml_function_coverage=1 00:19:47.994 --rc genhtml_legend=1 00:19:47.994 --rc geninfo_all_blocks=1 00:19:47.994 --rc geninfo_unexecuted_blocks=1 00:19:47.994 00:19:47.994 ' 00:19:47.994 10:48:09 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:47.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.994 --rc genhtml_branch_coverage=1 00:19:47.994 --rc genhtml_function_coverage=1 00:19:47.994 --rc genhtml_legend=1 00:19:47.994 --rc geninfo_all_blocks=1 00:19:47.994 --rc geninfo_unexecuted_blocks=1 00:19:47.994 00:19:47.994 ' 00:19:47.994 10:48:09 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:19:47.994 10:48:09 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:19:47.994 10:48:09 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:19:47.994 10:48:09 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:47.994 10:48:09 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:19:47.994 10:48:09 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:19:47.994 10:48:09 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:19:47.994 10:48:09 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:19:47.994 10:48:09 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:19:47.994 10:48:09 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:19:47.994 10:48:09 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:19:47.994 10:48:09 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:19:47.994 10:48:09 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:19:47.994 10:48:09 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:19:47.994 10:48:09 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:19:47.994 10:48:09 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:19:47.994 10:48:09 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:19:47.994 10:48:09 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:19:47.994 10:48:09 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:19:47.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:47.994 10:48:09 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:19:47.994 10:48:09 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:19:47.994 10:48:09 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:19:47.994 10:48:09 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:19:47.994 10:48:09 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:19:47.994 10:48:09 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90413 00:19:47.994 10:48:09 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:47.994 10:48:09 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90413 00:19:47.994 10:48:09 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 90413 ']' 00:19:47.994 10:48:09 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:19:47.994 10:48:09 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:47.994 10:48:09 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:47.994 10:48:09 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:47.994 10:48:09 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:47.994 10:48:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:48.252 [2024-11-15 10:48:09.215197] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:19:48.252 [2024-11-15 10:48:09.215669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90413 ] 00:19:48.252 [2024-11-15 10:48:09.402024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.519 [2024-11-15 10:48:09.541435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.468 10:48:10 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:49.469 10:48:10 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:19:49.469 10:48:10 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:19:49.469 10:48:10 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:19:49.469 10:48:10 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:19:49.469 10:48:10 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.469 10:48:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:49.469 Malloc0 00:19:49.469 Malloc1 00:19:49.469 Malloc2 00:19:49.469 10:48:10 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.469 10:48:10 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:19:49.469 10:48:10 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.469 10:48:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:49.469 10:48:10 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.469 10:48:10 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:19:49.469 10:48:10 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:19:49.469 10:48:10 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.469 10:48:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:49.727 10:48:10 
blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.727 10:48:10 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:19:49.727 10:48:10 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.727 10:48:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:49.727 10:48:10 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.727 10:48:10 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:19:49.727 10:48:10 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.727 10:48:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:49.727 10:48:10 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.727 10:48:10 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:19:49.727 10:48:10 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:19:49.727 10:48:10 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:19:49.727 10:48:10 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.727 10:48:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:49.727 10:48:10 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.727 10:48:10 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:19:49.728 10:48:10 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "656cb87d-25b6-4884-a439-aa2cb246399b"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "656cb87d-25b6-4884-a439-aa2cb246399b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' 
"flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "656cb87d-25b6-4884-a439-aa2cb246399b",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "6d2e18eb-057f-4574-8500-eb88c636e7a0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "e8e6da53-2f5b-4c8e-a8cf-2f49466b12e3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "2cc82455-282a-4c4c-a825-46e61025e7d0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:49.728 10:48:10 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:19:49.728 10:48:10 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:19:49.728 10:48:10 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:19:49.728 10:48:10 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:19:49.728 10:48:10 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 90413 00:19:49.728 10:48:10 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 90413 ']' 00:19:49.728 10:48:10 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 90413 00:19:49.728 10:48:10 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:19:49.728 10:48:10 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:49.728 
10:48:10 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90413 00:19:49.728 killing process with pid 90413 00:19:49.728 10:48:10 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:49.728 10:48:10 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:49.728 10:48:10 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90413' 00:19:49.728 10:48:10 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 90413 00:19:49.728 10:48:10 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 90413 00:19:52.259 10:48:13 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:52.259 10:48:13 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:52.259 10:48:13 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:52.259 10:48:13 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:52.259 10:48:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:52.259 ************************************ 00:19:52.259 START TEST bdev_hello_world 00:19:52.259 ************************************ 00:19:52.259 10:48:13 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:52.517 [2024-11-15 10:48:13.449391] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:19:52.517 [2024-11-15 10:48:13.449604] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90480 ] 00:19:52.517 [2024-11-15 10:48:13.633354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.775 [2024-11-15 10:48:13.766143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.344 [2024-11-15 10:48:14.273330] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:19:53.344 [2024-11-15 10:48:14.273382] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:19:53.344 [2024-11-15 10:48:14.273421] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:19:53.344 [2024-11-15 10:48:14.274044] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:19:53.344 [2024-11-15 10:48:14.274221] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:19:53.344 [2024-11-15 10:48:14.274268] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:19:53.344 [2024-11-15 10:48:14.274359] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:19:53.344 00:19:53.344 [2024-11-15 10:48:14.274388] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:54.719 00:19:54.719 real 0m2.102s 00:19:54.719 user 0m1.674s 00:19:54.719 sys 0m0.304s 00:19:54.719 ************************************ 00:19:54.719 END TEST bdev_hello_world 00:19:54.719 ************************************ 00:19:54.719 10:48:15 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:54.719 10:48:15 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:54.719 10:48:15 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:19:54.719 10:48:15 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:54.719 10:48:15 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:54.719 10:48:15 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:54.719 ************************************ 00:19:54.719 START TEST bdev_bounds 00:19:54.719 ************************************ 00:19:54.719 Process bdevio pid: 90521 00:19:54.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:54.719 10:48:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:19:54.719 10:48:15 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90521 00:19:54.719 10:48:15 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:54.719 10:48:15 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:54.719 10:48:15 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90521' 00:19:54.719 10:48:15 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90521 00:19:54.719 10:48:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90521 ']' 00:19:54.719 10:48:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.719 10:48:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:54.719 10:48:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.719 10:48:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:54.719 10:48:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:54.719 [2024-11-15 10:48:15.639363] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:19:54.719 [2024-11-15 10:48:15.639798] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90521 ] 00:19:54.719 [2024-11-15 10:48:15.824670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:54.978 [2024-11-15 10:48:15.964007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:54.978 [2024-11-15 10:48:15.964150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.978 [2024-11-15 10:48:15.964163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:55.545 10:48:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:55.545 10:48:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:19:55.545 10:48:16 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:55.803 I/O targets: 00:19:55.803 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:19:55.803 00:19:55.803 00:19:55.803 CUnit - A unit testing framework for C - Version 2.1-3 00:19:55.803 http://cunit.sourceforge.net/ 00:19:55.803 00:19:55.803 00:19:55.803 Suite: bdevio tests on: raid5f 00:19:55.803 Test: blockdev write read block ...passed 00:19:55.803 Test: blockdev write zeroes read block ...passed 00:19:55.803 Test: blockdev write zeroes read no split ...passed 00:19:56.062 Test: blockdev write zeroes read split ...passed 00:19:56.062 Test: blockdev write zeroes read split partial ...passed 00:19:56.062 Test: blockdev reset ...passed 00:19:56.062 Test: blockdev write read 8 blocks ...passed 00:19:56.062 Test: blockdev write read size > 128k ...passed 00:19:56.062 Test: blockdev write read invalid size ...passed 00:19:56.062 Test: blockdev write read offset + nbytes == size of blockdev ...passed 
00:19:56.062 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:56.062 Test: blockdev write read max offset ...passed 00:19:56.062 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:56.062 Test: blockdev writev readv 8 blocks ...passed 00:19:56.062 Test: blockdev writev readv 30 x 1block ...passed 00:19:56.062 Test: blockdev writev readv block ...passed 00:19:56.062 Test: blockdev writev readv size > 128k ...passed 00:19:56.062 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:56.062 Test: blockdev comparev and writev ...passed 00:19:56.062 Test: blockdev nvme passthru rw ...passed 00:19:56.062 Test: blockdev nvme passthru vendor specific ...passed 00:19:56.062 Test: blockdev nvme admin passthru ...passed 00:19:56.062 Test: blockdev copy ...passed 00:19:56.062 00:19:56.062 Run Summary: Type Total Ran Passed Failed Inactive 00:19:56.062 suites 1 1 n/a 0 0 00:19:56.062 tests 23 23 23 0 0 00:19:56.062 asserts 130 130 130 0 n/a 00:19:56.062 00:19:56.062 Elapsed time = 0.581 seconds 00:19:56.062 0 00:19:56.062 10:48:17 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90521 00:19:56.062 10:48:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90521 ']' 00:19:56.062 10:48:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90521 00:19:56.062 10:48:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:19:56.062 10:48:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:56.062 10:48:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90521 00:19:56.062 10:48:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:56.062 10:48:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:56.062 10:48:17 blockdev_raid5f.bdev_bounds -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 90521' 00:19:56.062 killing process with pid 90521 00:19:56.062 10:48:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90521 00:19:56.062 10:48:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90521 00:19:57.439 10:48:18 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:19:57.439 ************************************ 00:19:57.439 END TEST bdev_bounds 00:19:57.439 ************************************ 00:19:57.439 00:19:57.439 real 0m3.042s 00:19:57.439 user 0m7.609s 00:19:57.439 sys 0m0.489s 00:19:57.439 10:48:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:57.439 10:48:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:57.439 10:48:18 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:57.439 10:48:18 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:57.439 10:48:18 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:57.439 10:48:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:57.699 ************************************ 00:19:57.699 START TEST bdev_nbd 00:19:57.699 ************************************ 00:19:57.699 10:48:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:57.699 10:48:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:19:57.699 10:48:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:19:57.699 10:48:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:57.699 10:48:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local 
conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:57.699 10:48:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:19:57.699 10:48:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:19:57.699 10:48:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:19:57.699 10:48:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:57.699 10:48:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:57.699 10:48:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:57.699 10:48:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:19:57.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:57.699 10:48:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:19:57.699 10:48:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:19:57.699 10:48:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:19:57.699 10:48:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:19:57.699 10:48:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90583 00:19:57.699 10:48:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:57.699 10:48:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90583 /var/tmp/spdk-nbd.sock 00:19:57.699 10:48:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:57.699 10:48:18 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@835 -- # '[' -z 90583 ']' 00:19:57.699 10:48:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:57.699 10:48:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:57.699 10:48:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:57.699 10:48:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:57.699 10:48:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:57.699 [2024-11-15 10:48:18.716330] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:19:57.699 [2024-11-15 10:48:18.716828] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:57.959 [2024-11-15 10:48:18.899280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.959 [2024-11-15 10:48:19.036605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.527 10:48:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:58.527 10:48:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:19:58.527 10:48:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:19:58.527 10:48:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:58.527 10:48:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:19:58.527 10:48:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:58.527 10:48:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # 
nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:19:58.527 10:48:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:58.527 10:48:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:19:58.527 10:48:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:58.527 10:48:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:58.527 10:48:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:58.527 10:48:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:58.527 10:48:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:58.527 10:48:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:19:59.094 10:48:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:59.094 10:48:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:59.094 10:48:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:59.094 10:48:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:59.094 10:48:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:59.094 10:48:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:59.094 10:48:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:59.094 10:48:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:59.094 10:48:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:59.094 10:48:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:59.094 10:48:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:59.094 
10:48:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:59.094 1+0 records in 00:19:59.094 1+0 records out 00:19:59.094 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283893 s, 14.4 MB/s 00:19:59.094 10:48:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:59.094 10:48:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:59.094 10:48:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:59.094 10:48:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:59.094 10:48:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:59.094 10:48:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:59.094 10:48:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:59.094 10:48:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:59.353 10:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:59.353 { 00:19:59.353 "nbd_device": "/dev/nbd0", 00:19:59.353 "bdev_name": "raid5f" 00:19:59.353 } 00:19:59.353 ]' 00:19:59.353 10:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:59.353 10:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:59.353 { 00:19:59.353 "nbd_device": "/dev/nbd0", 00:19:59.353 "bdev_name": "raid5f" 00:19:59.353 } 00:19:59.353 ]' 00:19:59.353 10:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:59.353 10:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks 
/var/tmp/spdk-nbd.sock /dev/nbd0 00:19:59.353 10:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:59.353 10:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:59.353 10:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:59.353 10:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:59.353 10:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:59.353 10:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:59.612 10:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:59.612 10:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:59.612 10:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:59.612 10:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:59.612 10:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:59.612 10:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:59.612 10:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:59.612 10:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:59.612 10:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:59.612 10:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:59.612 10:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:59.871 10:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:59.871 10:48:20 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:59.871 10:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:59.871 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:59.871 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:59.871 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:59.871 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:59.871 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:59.871 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:59.871 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:59.871 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:59.871 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:59.871 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:59.871 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:59.871 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:19:59.871 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:59.871 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:19:59.871 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:59.871 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:59.871 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:59.871 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:19:59.871 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local 
bdev_list 00:19:59.871 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:59.871 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:59.871 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:59.871 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:59.871 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:59.871 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:20:00.439 /dev/nbd0 00:20:00.439 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:00.439 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:00.439 10:48:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:00.439 10:48:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:00.439 10:48:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:00.439 10:48:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:00.439 10:48:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:00.439 10:48:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:00.439 10:48:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:00.439 10:48:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:00.439 10:48:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:00.439 1+0 records in 00:20:00.439 1+0 records out 00:20:00.439 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396994 s, 10.3 MB/s 00:20:00.439 10:48:21 
blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:00.439 10:48:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:00.439 10:48:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:00.439 10:48:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:00.439 10:48:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:00.439 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:00.439 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:00.439 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:00.439 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:00.439 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:00.697 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:20:00.697 { 00:20:00.697 "nbd_device": "/dev/nbd0", 00:20:00.697 "bdev_name": "raid5f" 00:20:00.698 } 00:20:00.698 ]' 00:20:00.698 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:00.698 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:20:00.698 { 00:20:00.698 "nbd_device": "/dev/nbd0", 00:20:00.698 "bdev_name": "raid5f" 00:20:00.698 } 00:20:00.698 ]' 00:20:00.698 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:20:00.698 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:00.698 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:20:00.698 10:48:21 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@65 -- # count=1 00:20:00.698 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:20:00.698 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:20:00.698 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:20:00.698 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:20:00.698 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:20:00.698 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:00.698 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:20:00.698 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:00.698 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:20:00.698 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:20:00.698 256+0 records in 00:20:00.698 256+0 records out 00:20:00.698 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00739688 s, 142 MB/s 00:20:00.698 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:00.698 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:20:00.698 256+0 records in 00:20:00.698 256+0 records out 00:20:00.698 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0399053 s, 26.3 MB/s 00:20:00.698 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:20:00.698 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:20:00.698 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:00.698 10:48:21 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:20:00.698 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:00.698 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:20:00.698 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:20:00.698 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:00.698 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:20:00.698 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:00.698 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:00.698 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:00.698 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:00.698 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:00.698 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:00.698 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:00.698 10:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:00.956 10:48:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:00.957 10:48:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:00.957 10:48:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:00.957 10:48:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:00.957 
10:48:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:00.957 10:48:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:00.957 10:48:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:00.957 10:48:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:00.957 10:48:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:00.957 10:48:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:00.957 10:48:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:01.215 10:48:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:01.215 10:48:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:01.215 10:48:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:01.474 10:48:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:01.474 10:48:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:01.474 10:48:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:01.474 10:48:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:01.474 10:48:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:01.474 10:48:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:01.474 10:48:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:20:01.474 10:48:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:20:01.474 10:48:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:20:01.474 10:48:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:01.474 10:48:22 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:01.474 10:48:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:20:01.474 10:48:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:20:01.733 malloc_lvol_verify 00:20:01.733 10:48:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:20:01.733 fa993a32-ac82-4e24-b406-0099ca6b873c 00:20:01.992 10:48:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:20:01.992 bc83e417-f41b-49b9-874e-94ebee08b3b0 00:20:02.250 10:48:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:20:02.250 /dev/nbd0 00:20:02.250 10:48:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:20:02.250 10:48:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:20:02.250 10:48:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:20:02.250 10:48:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:20:02.250 10:48:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:20:02.509 mke2fs 1.47.0 (5-Feb-2023) 00:20:02.509 Discarding device blocks: 0/4096 done 00:20:02.509 Creating filesystem with 4096 1k blocks and 1024 inodes 00:20:02.509 00:20:02.509 Allocating group tables: 0/1 done 00:20:02.509 Writing inode tables: 0/1 done 00:20:02.509 Creating journal (1024 blocks): done 00:20:02.509 Writing superblocks and filesystem accounting information: 0/1 
done 00:20:02.509 00:20:02.509 10:48:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:02.509 10:48:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:02.509 10:48:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:02.509 10:48:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:02.509 10:48:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:02.509 10:48:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:02.509 10:48:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:02.768 10:48:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:02.768 10:48:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:02.768 10:48:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:02.768 10:48:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:02.768 10:48:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:02.768 10:48:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:02.768 10:48:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:02.768 10:48:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:02.768 10:48:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90583 00:20:02.768 10:48:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90583 ']' 00:20:02.768 10:48:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90583 00:20:02.768 10:48:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:20:02.768 10:48:23 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:02.768 10:48:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90583 00:20:02.768 10:48:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:02.768 10:48:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:02.768 killing process with pid 90583 00:20:02.768 10:48:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90583' 00:20:02.768 10:48:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90583 00:20:02.768 10:48:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@978 -- # wait 90583 00:20:04.156 10:48:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:20:04.156 00:20:04.156 real 0m6.471s 00:20:04.156 user 0m9.227s 00:20:04.156 sys 0m1.441s 00:20:04.156 ************************************ 00:20:04.156 END TEST bdev_nbd 00:20:04.156 ************************************ 00:20:04.156 10:48:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:04.156 10:48:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:04.156 10:48:25 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:20:04.156 10:48:25 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:20:04.156 10:48:25 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:20:04.156 10:48:25 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:20:04.156 10:48:25 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:04.156 10:48:25 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:04.156 10:48:25 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:04.156 ************************************ 00:20:04.156 START TEST bdev_fio 00:20:04.156 
************************************ 00:20:04.156 10:48:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:20:04.156 10:48:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:20:04.156 10:48:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:20:04.156 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:20:04.156 10:48:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:20:04.156 10:48:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:20:04.157 10:48:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:20:04.157 10:48:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:20:04.157 10:48:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:20:04.157 10:48:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:04.157 10:48:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:20:04.157 10:48:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:20:04.157 10:48:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:20:04.157 10:48:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:20:04.157 10:48:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:04.157 10:48:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:20:04.157 10:48:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:20:04.157 10:48:25 blockdev_raid5f.bdev_fio -- 
common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:04.157 10:48:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:20:04.157 10:48:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:20:04.157 10:48:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:20:04.157 10:48:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:20:04.157 10:48:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:20:04.157 10:48:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:20:04.157 10:48:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:20:04.157 10:48:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:04.157 10:48:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:20:04.157 10:48:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:20:04.157 10:48:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:20:04.157 10:48:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:04.157 10:48:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:20:04.157 10:48:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:04.157 
10:48:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:04.157 ************************************ 00:20:04.157 START TEST bdev_fio_rw_verify 00:20:04.157 ************************************ 00:20:04.157 10:48:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:04.157 10:48:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:04.157 10:48:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:04.157 10:48:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:04.157 10:48:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:04.157 10:48:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:04.157 10:48:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:20:04.157 10:48:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:04.157 10:48:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:04.157 10:48:25 
blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:20:04.157 10:48:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:04.157 10:48:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:04.157 10:48:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:04.157 10:48:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:04.157 10:48:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:20:04.157 10:48:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:04.157 10:48:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:04.416 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:04.416 fio-3.35 00:20:04.416 Starting 1 thread 00:20:16.621 00:20:16.621 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90792: Fri Nov 15 10:48:36 2024 00:20:16.621 read: IOPS=8447, BW=33.0MiB/s (34.6MB/s)(330MiB/10001msec) 00:20:16.621 slat (usec): min=20, max=124, avg=28.74, stdev= 8.30 00:20:16.621 clat (usec): min=13, max=724, avg=187.56, stdev=72.89 00:20:16.621 lat (usec): min=39, max=806, avg=216.30, stdev=74.38 00:20:16.621 clat percentiles (usec): 00:20:16.621 | 50.000th=[ 186], 99.000th=[ 347], 99.900th=[ 416], 
99.990th=[ 693], 00:20:16.621 | 99.999th=[ 725] 00:20:16.621 write: IOPS=8858, BW=34.6MiB/s (36.3MB/s)(341MiB/9869msec); 0 zone resets 00:20:16.621 slat (usec): min=9, max=223, avg=23.72, stdev= 8.01 00:20:16.621 clat (usec): min=68, max=1341, avg=435.00, stdev=67.07 00:20:16.621 lat (usec): min=87, max=1507, avg=458.72, stdev=68.82 00:20:16.621 clat percentiles (usec): 00:20:16.621 | 50.000th=[ 437], 99.000th=[ 603], 99.900th=[ 676], 99.990th=[ 1139], 00:20:16.621 | 99.999th=[ 1336] 00:20:16.621 bw ( KiB/s): min=31792, max=38984, per=99.14%, avg=35127.58, stdev=1957.14, samples=19 00:20:16.621 iops : min= 7948, max= 9746, avg=8781.89, stdev=489.28, samples=19 00:20:16.621 lat (usec) : 20=0.01%, 50=0.01%, 100=6.51%, 250=31.76%, 500=53.97% 00:20:16.621 lat (usec) : 750=7.73%, 1000=0.02% 00:20:16.621 lat (msec) : 2=0.01% 00:20:16.621 cpu : usr=98.24%, sys=0.86%, ctx=30, majf=0, minf=7339 00:20:16.621 IO depths : 1=7.7%, 2=20.0%, 4=55.0%, 8=17.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:16.621 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:16.621 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:16.621 issued rwts: total=84487,87422,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:16.621 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:16.621 00:20:16.621 Run status group 0 (all jobs): 00:20:16.621 READ: bw=33.0MiB/s (34.6MB/s), 33.0MiB/s-33.0MiB/s (34.6MB/s-34.6MB/s), io=330MiB (346MB), run=10001-10001msec 00:20:16.621 WRITE: bw=34.6MiB/s (36.3MB/s), 34.6MiB/s-34.6MiB/s (36.3MB/s-36.3MB/s), io=341MiB (358MB), run=9869-9869msec 00:20:17.189 ----------------------------------------------------- 00:20:17.189 Suppressions used: 00:20:17.189 count bytes template 00:20:17.189 1 7 /usr/src/fio/parse.c 00:20:17.189 322 30912 /usr/src/fio/iolog.c 00:20:17.189 1 8 libtcmalloc_minimal.so 00:20:17.189 1 904 libcrypto.so 00:20:17.189 ----------------------------------------------------- 00:20:17.189 00:20:17.189 
00:20:17.189 real 0m12.973s 00:20:17.189 user 0m13.319s 00:20:17.189 sys 0m0.863s 00:20:17.189 10:48:38 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:17.189 10:48:38 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:20:17.189 ************************************ 00:20:17.189 END TEST bdev_fio_rw_verify 00:20:17.189 ************************************ 00:20:17.189 10:48:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:20:17.189 10:48:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:17.189 10:48:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:20:17.189 10:48:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:17.189 10:48:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:20:17.189 10:48:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:20:17.189 10:48:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:20:17.189 10:48:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:20:17.189 10:48:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:17.189 10:48:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:20:17.189 10:48:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:20:17.189 10:48:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:17.189 10:48:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:20:17.189 10:48:38 
blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:20:17.189 10:48:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:20:17.190 10:48:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:20:17.190 10:48:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "656cb87d-25b6-4884-a439-aa2cb246399b"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "656cb87d-25b6-4884-a439-aa2cb246399b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "656cb87d-25b6-4884-a439-aa2cb246399b",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "6d2e18eb-057f-4574-8500-eb88c636e7a0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "e8e6da53-2f5b-4c8e-a8cf-2f49466b12e3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "2cc82455-282a-4c4c-a825-46e61025e7d0",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:20:17.190 10:48:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:20:17.190 10:48:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:20:17.190 10:48:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:17.190 /home/vagrant/spdk_repo/spdk 00:20:17.190 10:48:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:20:17.190 10:48:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:20:17.190 10:48:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:20:17.190 00:20:17.190 real 0m13.205s 00:20:17.190 user 0m13.420s 00:20:17.190 sys 0m0.960s 00:20:17.190 10:48:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:17.190 10:48:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:17.190 ************************************ 00:20:17.190 END TEST bdev_fio 00:20:17.190 ************************************ 00:20:17.448 10:48:38 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:17.448 10:48:38 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:17.448 10:48:38 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:20:17.448 10:48:38 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:17.448 10:48:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:17.449 ************************************ 00:20:17.449 START TEST bdev_verify 00:20:17.449 ************************************ 00:20:17.449 10:48:38 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:17.449 [2024-11-15 10:48:38.470101] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:20:17.449 [2024-11-15 10:48:38.470268] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90952 ] 00:20:17.707 [2024-11-15 10:48:38.653751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:17.707 [2024-11-15 10:48:38.822622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.707 [2024-11-15 10:48:38.822623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.274 Running I/O for 5 seconds... 00:20:20.586 11966.00 IOPS, 46.74 MiB/s [2024-11-15T10:48:42.684Z] 11576.50 IOPS, 45.22 MiB/s [2024-11-15T10:48:43.622Z] 11409.00 IOPS, 44.57 MiB/s [2024-11-15T10:48:44.558Z] 11268.75 IOPS, 44.02 MiB/s [2024-11-15T10:48:44.558Z] 11681.80 IOPS, 45.63 MiB/s 00:20:23.396 Latency(us) 00:20:23.396 [2024-11-15T10:48:44.558Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:23.396 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:23.396 Verification LBA range: start 0x0 length 0x2000 00:20:23.396 raid5f : 5.02 5826.32 22.76 0.00 0.00 33107.92 131.26 25976.09 00:20:23.396 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:23.396 Verification LBA range: start 0x2000 length 0x2000 00:20:23.396 raid5f : 5.02 5835.32 22.79 0.00 0.00 32905.64 268.10 26095.24 00:20:23.396 [2024-11-15T10:48:44.558Z] =================================================================================================================== 00:20:23.396 [2024-11-15T10:48:44.558Z] Total : 11661.64 
45.55 0.00 0.00 33006.77 131.26 26095.24 00:20:24.775 00:20:24.775 real 0m7.405s 00:20:24.775 user 0m13.509s 00:20:24.775 sys 0m0.344s 00:20:24.775 10:48:45 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:24.775 ************************************ 00:20:24.775 END TEST bdev_verify 00:20:24.775 ************************************ 00:20:24.775 10:48:45 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:20:24.775 10:48:45 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:24.775 10:48:45 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:20:24.775 10:48:45 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:24.775 10:48:45 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:24.775 ************************************ 00:20:24.775 START TEST bdev_verify_big_io 00:20:24.775 ************************************ 00:20:24.775 10:48:45 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:25.034 [2024-11-15 10:48:45.943988] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 
00:20:25.034 [2024-11-15 10:48:45.944177] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91049 ] 00:20:25.034 [2024-11-15 10:48:46.132803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:25.293 [2024-11-15 10:48:46.272459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.293 [2024-11-15 10:48:46.272472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:25.862 Running I/O for 5 seconds... 00:20:28.179 506.00 IOPS, 31.62 MiB/s [2024-11-15T10:48:50.281Z] 696.00 IOPS, 43.50 MiB/s [2024-11-15T10:48:51.218Z] 760.00 IOPS, 47.50 MiB/s [2024-11-15T10:48:52.171Z] 761.00 IOPS, 47.56 MiB/s [2024-11-15T10:48:52.171Z] 761.80 IOPS, 47.61 MiB/s 00:20:31.009 Latency(us) 00:20:31.009 [2024-11-15T10:48:52.171Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:31.009 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:31.009 Verification LBA range: start 0x0 length 0x200 00:20:31.009 raid5f : 5.21 377.57 23.60 0.00 0.00 8256533.15 209.45 354609.34 00:20:31.009 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:31.009 Verification LBA range: start 0x200 length 0x200 00:20:31.009 raid5f : 5.12 371.97 23.25 0.00 0.00 8495075.22 177.80 358422.34 00:20:31.009 [2024-11-15T10:48:52.171Z] =================================================================================================================== 00:20:31.009 [2024-11-15T10:48:52.171Z] Total : 749.54 46.85 0.00 0.00 8373894.37 177.80 358422.34 00:20:32.389 00:20:32.389 real 0m7.615s 00:20:32.389 user 0m13.912s 00:20:32.389 sys 0m0.373s 00:20:32.389 10:48:53 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:32.389 
************************************ 00:20:32.389 END TEST bdev_verify_big_io 00:20:32.389 ************************************ 00:20:32.389 10:48:53 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:20:32.389 10:48:53 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:32.389 10:48:53 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:32.389 10:48:53 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:32.389 10:48:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:32.389 ************************************ 00:20:32.389 START TEST bdev_write_zeroes 00:20:32.389 ************************************ 00:20:32.389 10:48:53 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:32.649 [2024-11-15 10:48:53.593338] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:20:32.649 [2024-11-15 10:48:53.593550] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91142 ] 00:20:32.649 [2024-11-15 10:48:53.773485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.908 [2024-11-15 10:48:53.912517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.477 Running I/O for 1 seconds... 
00:20:34.423 19431.00 IOPS, 75.90 MiB/s 00:20:34.423 Latency(us) 00:20:34.423 [2024-11-15T10:48:55.585Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.423 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:34.423 raid5f : 1.01 19412.67 75.83 0.00 0.00 6567.78 2010.76 8519.68 00:20:34.423 [2024-11-15T10:48:55.585Z] =================================================================================================================== 00:20:34.423 [2024-11-15T10:48:55.585Z] Total : 19412.67 75.83 0.00 0.00 6567.78 2010.76 8519.68 00:20:35.801 00:20:35.801 real 0m3.292s 00:20:35.801 user 0m2.842s 00:20:35.801 sys 0m0.315s 00:20:35.801 10:48:56 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:35.801 10:48:56 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:20:35.801 ************************************ 00:20:35.801 END TEST bdev_write_zeroes 00:20:35.801 ************************************ 00:20:35.801 10:48:56 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:35.801 10:48:56 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:35.801 10:48:56 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:35.801 10:48:56 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:35.801 ************************************ 00:20:35.801 START TEST bdev_json_nonenclosed 00:20:35.801 ************************************ 00:20:35.801 10:48:56 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:35.801 [2024-11-15 
10:48:56.932725] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:20:35.801 [2024-11-15 10:48:56.932874] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91201 ] 00:20:36.083 [2024-11-15 10:48:57.109145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.343 [2024-11-15 10:48:57.243187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:36.343 [2024-11-15 10:48:57.243352] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:20:36.343 [2024-11-15 10:48:57.243392] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:36.343 [2024-11-15 10:48:57.243414] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:36.343 00:20:36.343 real 0m0.662s 00:20:36.343 user 0m0.423s 00:20:36.343 sys 0m0.135s 00:20:36.343 10:48:57 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:36.343 ************************************ 00:20:36.343 END TEST bdev_json_nonenclosed 00:20:36.343 10:48:57 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:20:36.602 ************************************ 00:20:36.602 10:48:57 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:36.602 10:48:57 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:36.602 10:48:57 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:36.602 10:48:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:36.602 
************************************ 00:20:36.602 START TEST bdev_json_nonarray 00:20:36.602 ************************************ 00:20:36.602 10:48:57 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:36.602 [2024-11-15 10:48:57.665478] Starting SPDK v25.01-pre git sha1 e081e4a1a / DPDK 24.03.0 initialization... 00:20:36.602 [2024-11-15 10:48:57.665683] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91225 ] 00:20:36.861 [2024-11-15 10:48:57.853710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.861 [2024-11-15 10:48:57.987668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:36.861 [2024-11-15 10:48:57.987834] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:20:36.861 [2024-11-15 10:48:57.987863] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:36.861 [2024-11-15 10:48:57.987887] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:37.120 00:20:37.120 real 0m0.687s 00:20:37.120 user 0m0.431s 00:20:37.120 sys 0m0.150s 00:20:37.120 10:48:58 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:37.120 ************************************ 00:20:37.120 10:48:58 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:20:37.120 END TEST bdev_json_nonarray 00:20:37.120 ************************************ 00:20:37.379 10:48:58 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:20:37.379 10:48:58 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:20:37.379 10:48:58 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:20:37.379 10:48:58 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:20:37.379 10:48:58 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:20:37.379 10:48:58 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:20:37.379 10:48:58 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:37.379 10:48:58 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:20:37.379 10:48:58 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:20:37.379 10:48:58 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:20:37.379 10:48:58 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:20:37.379 00:20:37.379 real 0m49.404s 00:20:37.379 user 1m7.520s 00:20:37.379 sys 0m5.575s 00:20:37.379 10:48:58 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:37.379 10:48:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:37.379 
************************************ 00:20:37.379 END TEST blockdev_raid5f 00:20:37.379 ************************************ 00:20:37.379 10:48:58 -- spdk/autotest.sh@194 -- # uname -s 00:20:37.379 10:48:58 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:20:37.379 10:48:58 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:20:37.379 10:48:58 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:20:37.379 10:48:58 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:20:37.379 10:48:58 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:20:37.379 10:48:58 -- spdk/autotest.sh@260 -- # timing_exit lib 00:20:37.379 10:48:58 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:37.379 10:48:58 -- common/autotest_common.sh@10 -- # set +x 00:20:37.379 10:48:58 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:20:37.379 10:48:58 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:20:37.379 10:48:58 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:20:37.379 10:48:58 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:20:37.379 10:48:58 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:20:37.379 10:48:58 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:20:37.379 10:48:58 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:20:37.379 10:48:58 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:20:37.379 10:48:58 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:20:37.379 10:48:58 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:20:37.379 10:48:58 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:20:37.379 10:48:58 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:20:37.379 10:48:58 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:20:37.379 10:48:58 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:20:37.379 10:48:58 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:20:37.379 10:48:58 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:20:37.379 10:48:58 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:20:37.379 10:48:58 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:20:37.379 10:48:58 -- spdk/autotest.sh@385 -- # trap - SIGINT 
SIGTERM EXIT 00:20:37.380 10:48:58 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:20:37.380 10:48:58 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:37.380 10:48:58 -- common/autotest_common.sh@10 -- # set +x 00:20:37.380 10:48:58 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:20:37.380 10:48:58 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:20:37.380 10:48:58 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:20:37.380 10:48:58 -- common/autotest_common.sh@10 -- # set +x 00:20:39.285 INFO: APP EXITING 00:20:39.285 INFO: killing all VMs 00:20:39.285 INFO: killing vhost app 00:20:39.285 INFO: EXIT DONE 00:20:39.285 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:39.285 Waiting for block devices as requested 00:20:39.543 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:39.543 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:40.109 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:40.367 Cleaning 00:20:40.367 Removing: /var/run/dpdk/spdk0/config 00:20:40.367 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:20:40.367 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:20:40.367 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:20:40.367 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:20:40.367 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:20:40.367 Removing: /var/run/dpdk/spdk0/hugepage_info 00:20:40.367 Removing: /dev/shm/spdk_tgt_trace.pid56830 00:20:40.367 Removing: /var/run/dpdk/spdk0 00:20:40.367 Removing: /var/run/dpdk/spdk_pid56595 00:20:40.367 Removing: /var/run/dpdk/spdk_pid56830 00:20:40.367 Removing: /var/run/dpdk/spdk_pid57059 00:20:40.367 Removing: /var/run/dpdk/spdk_pid57169 00:20:40.367 Removing: /var/run/dpdk/spdk_pid57219 00:20:40.367 Removing: /var/run/dpdk/spdk_pid57353 00:20:40.367 Removing: /var/run/dpdk/spdk_pid57371 
00:20:40.367 Removing: /var/run/dpdk/spdk_pid57581 00:20:40.367 Removing: /var/run/dpdk/spdk_pid57687 00:20:40.367 Removing: /var/run/dpdk/spdk_pid57794 00:20:40.367 Removing: /var/run/dpdk/spdk_pid57916 00:20:40.367 Removing: /var/run/dpdk/spdk_pid58030 00:20:40.367 Removing: /var/run/dpdk/spdk_pid58069 00:20:40.367 Removing: /var/run/dpdk/spdk_pid58106 00:20:40.367 Removing: /var/run/dpdk/spdk_pid58182 00:20:40.367 Removing: /var/run/dpdk/spdk_pid58293 00:20:40.367 Removing: /var/run/dpdk/spdk_pid58768 00:20:40.367 Removing: /var/run/dpdk/spdk_pid58843 00:20:40.367 Removing: /var/run/dpdk/spdk_pid58919 00:20:40.367 Removing: /var/run/dpdk/spdk_pid58935 00:20:40.367 Removing: /var/run/dpdk/spdk_pid59092 00:20:40.367 Removing: /var/run/dpdk/spdk_pid59113 00:20:40.367 Removing: /var/run/dpdk/spdk_pid59263 00:20:40.367 Removing: /var/run/dpdk/spdk_pid59285 00:20:40.367 Removing: /var/run/dpdk/spdk_pid59354 00:20:40.367 Removing: /var/run/dpdk/spdk_pid59372 00:20:40.367 Removing: /var/run/dpdk/spdk_pid59442 00:20:40.367 Removing: /var/run/dpdk/spdk_pid59460 00:20:40.367 Removing: /var/run/dpdk/spdk_pid59655 00:20:40.367 Removing: /var/run/dpdk/spdk_pid59697 00:20:40.367 Removing: /var/run/dpdk/spdk_pid59775 00:20:40.367 Removing: /var/run/dpdk/spdk_pid61151 00:20:40.367 Removing: /var/run/dpdk/spdk_pid61363 00:20:40.367 Removing: /var/run/dpdk/spdk_pid61503 00:20:40.367 Removing: /var/run/dpdk/spdk_pid62157 00:20:40.367 Removing: /var/run/dpdk/spdk_pid62374 00:20:40.367 Removing: /var/run/dpdk/spdk_pid62520 00:20:40.367 Removing: /var/run/dpdk/spdk_pid63174 00:20:40.367 Removing: /var/run/dpdk/spdk_pid63510 00:20:40.367 Removing: /var/run/dpdk/spdk_pid63650 00:20:40.367 Removing: /var/run/dpdk/spdk_pid65063 00:20:40.367 Removing: /var/run/dpdk/spdk_pid65321 00:20:40.367 Removing: /var/run/dpdk/spdk_pid65467 00:20:40.367 Removing: /var/run/dpdk/spdk_pid66880 00:20:40.367 Removing: /var/run/dpdk/spdk_pid67133 00:20:40.367 Removing: /var/run/dpdk/spdk_pid67284 
00:20:40.367 Removing: /var/run/dpdk/spdk_pid68697 00:20:40.367 Removing: /var/run/dpdk/spdk_pid69147 00:20:40.367 Removing: /var/run/dpdk/spdk_pid69294 00:20:40.367 Removing: /var/run/dpdk/spdk_pid70807 00:20:40.367 Removing: /var/run/dpdk/spdk_pid71071 00:20:40.367 Removing: /var/run/dpdk/spdk_pid71219 00:20:40.367 Removing: /var/run/dpdk/spdk_pid72734 00:20:40.367 Removing: /var/run/dpdk/spdk_pid72999 00:20:40.367 Removing: /var/run/dpdk/spdk_pid73150 00:20:40.367 Removing: /var/run/dpdk/spdk_pid74655 00:20:40.367 Removing: /var/run/dpdk/spdk_pid75154 00:20:40.367 Removing: /var/run/dpdk/spdk_pid75298 00:20:40.367 Removing: /var/run/dpdk/spdk_pid75443 00:20:40.367 Removing: /var/run/dpdk/spdk_pid75889 00:20:40.367 Removing: /var/run/dpdk/spdk_pid76654 00:20:40.367 Removing: /var/run/dpdk/spdk_pid77036 00:20:40.367 Removing: /var/run/dpdk/spdk_pid77737 00:20:40.367 Removing: /var/run/dpdk/spdk_pid78223 00:20:40.367 Removing: /var/run/dpdk/spdk_pid79016 00:20:40.367 Removing: /var/run/dpdk/spdk_pid79437 00:20:40.625 Removing: /var/run/dpdk/spdk_pid81445 00:20:40.625 Removing: /var/run/dpdk/spdk_pid81902 00:20:40.625 Removing: /var/run/dpdk/spdk_pid82355 00:20:40.625 Removing: /var/run/dpdk/spdk_pid84474 00:20:40.625 Removing: /var/run/dpdk/spdk_pid84961 00:20:40.625 Removing: /var/run/dpdk/spdk_pid85470 00:20:40.625 Removing: /var/run/dpdk/spdk_pid86554 00:20:40.626 Removing: /var/run/dpdk/spdk_pid86877 00:20:40.626 Removing: /var/run/dpdk/spdk_pid87843 00:20:40.626 Removing: /var/run/dpdk/spdk_pid88166 00:20:40.626 Removing: /var/run/dpdk/spdk_pid89126 00:20:40.626 Removing: /var/run/dpdk/spdk_pid89449 00:20:40.626 Removing: /var/run/dpdk/spdk_pid90137 00:20:40.626 Removing: /var/run/dpdk/spdk_pid90413 00:20:40.626 Removing: /var/run/dpdk/spdk_pid90480 00:20:40.626 Removing: /var/run/dpdk/spdk_pid90521 00:20:40.626 Removing: /var/run/dpdk/spdk_pid90777 00:20:40.626 Removing: /var/run/dpdk/spdk_pid90952 00:20:40.626 Removing: /var/run/dpdk/spdk_pid91049 
00:20:40.626 Removing: /var/run/dpdk/spdk_pid91142 00:20:40.626 Removing: /var/run/dpdk/spdk_pid91201 00:20:40.626 Removing: /var/run/dpdk/spdk_pid91225 00:20:40.626 Clean 00:20:40.626 10:49:01 -- common/autotest_common.sh@1453 -- # return 0 00:20:40.626 10:49:01 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:20:40.626 10:49:01 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:40.626 10:49:01 -- common/autotest_common.sh@10 -- # set +x 00:20:40.626 10:49:01 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:20:40.626 10:49:01 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:40.626 10:49:01 -- common/autotest_common.sh@10 -- # set +x 00:20:40.626 10:49:01 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:40.626 10:49:01 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:20:40.626 10:49:01 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:20:40.626 10:49:01 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:20:40.626 10:49:01 -- spdk/autotest.sh@398 -- # hostname 00:20:40.626 10:49:01 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:20:40.885 geninfo: WARNING: invalid characters removed from testname! 
00:21:07.431 10:49:25 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:07.999 10:49:28 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:10.602 10:49:31 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:13.135 10:49:34 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:15.679 10:49:36 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:18.961 10:49:39 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:20.894 10:49:42 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:21:20.894 10:49:42 -- spdk/autorun.sh@1 -- $ timing_finish 00:21:20.894 10:49:42 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:21:20.894 10:49:42 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:21:20.894 10:49:42 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:21:20.894 10:49:42 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:21.153 + [[ -n 5254 ]] 00:21:21.153 + sudo kill 5254 00:21:21.162 [Pipeline] } 00:21:21.178 [Pipeline] // timeout 00:21:21.192 [Pipeline] } 00:21:21.258 [Pipeline] // stage 00:21:21.263 [Pipeline] } 00:21:21.271 [Pipeline] // catchError 00:21:21.278 [Pipeline] stage 00:21:21.280 [Pipeline] { (Stop VM) 00:21:21.289 [Pipeline] sh 00:21:21.562 + vagrant halt 00:21:24.849 ==> default: Halting domain... 00:21:31.426 [Pipeline] sh 00:21:31.705 + vagrant destroy -f 00:21:34.992 ==> default: Removing domain... 
00:21:35.005 [Pipeline] sh 00:21:35.288 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:21:35.297 [Pipeline] } 00:21:35.312 [Pipeline] // stage 00:21:35.318 [Pipeline] } 00:21:35.334 [Pipeline] // dir 00:21:35.339 [Pipeline] } 00:21:35.355 [Pipeline] // wrap 00:21:35.363 [Pipeline] } 00:21:35.377 [Pipeline] // catchError 00:21:35.387 [Pipeline] stage 00:21:35.390 [Pipeline] { (Epilogue) 00:21:35.403 [Pipeline] sh 00:21:35.687 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:21:40.986 [Pipeline] catchError 00:21:40.988 [Pipeline] { 00:21:41.002 [Pipeline] sh 00:21:41.284 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:21:41.543 Artifacts sizes are good 00:21:41.553 [Pipeline] } 00:21:41.568 [Pipeline] // catchError 00:21:41.583 [Pipeline] archiveArtifacts 00:21:41.591 Archiving artifacts 00:21:41.696 [Pipeline] cleanWs 00:21:41.709 [WS-CLEANUP] Deleting project workspace... 00:21:41.709 [WS-CLEANUP] Deferred wipeout is used... 00:21:41.714 [WS-CLEANUP] done 00:21:41.716 [Pipeline] } 00:21:41.730 [Pipeline] // stage 00:21:41.735 [Pipeline] } 00:21:41.749 [Pipeline] // node 00:21:41.755 [Pipeline] End of Pipeline 00:21:41.795 Finished: SUCCESS